00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2005
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3271
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.092 The recommended git tool is: git
00:00:00.093 using credential 00000000-0000-0000-0000-000000000002
00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.149 Fetching changes from the remote Git repository
00:00:00.153 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.204 Using shallow fetch with depth 1
00:00:00.204 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.204 > git --version # timeout=10
00:00:00.249 > git --version # 'git version 2.39.2'
00:00:00.249 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.273 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.273 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.467 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.478 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.491 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:04.491 > git config core.sparsecheckout # timeout=10
00:00:04.503 > git read-tree -mu HEAD # timeout=10
00:00:04.520 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:04.543 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:04.543 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:04.685 [Pipeline] Start of Pipeline
00:00:04.699 [Pipeline] library
00:00:04.701 Loading library shm_lib@master
00:00:04.701 Library shm_lib@master is cached. Copying from home.
00:00:04.716 [Pipeline] node
00:00:04.723 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.725 [Pipeline] {
00:00:04.734 [Pipeline] catchError
00:00:04.735 [Pipeline] {
00:00:04.746 [Pipeline] wrap
00:00:04.754 [Pipeline] {
00:00:04.761 [Pipeline] stage
00:00:04.763 [Pipeline] { (Prologue)
00:00:04.984 [Pipeline] sh
00:00:05.269 + logger -p user.info -t JENKINS-CI
00:00:05.289 [Pipeline] echo
00:00:05.291 Node: GP11
00:00:05.298 [Pipeline] sh
00:00:05.598 [Pipeline] setCustomBuildProperty
00:00:05.609 [Pipeline] echo
00:00:05.610 Cleanup processes
00:00:05.614 [Pipeline] sh
00:00:05.892 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.892 3811678 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.904 [Pipeline] sh
00:00:06.183 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.183 ++ grep -v 'sudo pgrep'
00:00:06.183 ++ awk '{print $1}'
00:00:06.183 + sudo kill -9
00:00:06.183 + true
00:00:06.199 [Pipeline] cleanWs
00:00:06.209 [WS-CLEANUP] Deleting project workspace...
00:00:06.209 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.215 [WS-CLEANUP] done 00:00:06.220 [Pipeline] setCustomBuildProperty 00:00:06.234 [Pipeline] sh 00:00:06.515 + sudo git config --global --replace-all safe.directory '*' 00:00:06.617 [Pipeline] httpRequest 00:00:06.651 [Pipeline] echo 00:00:06.652 Sorcerer 10.211.164.101 is alive 00:00:06.658 [Pipeline] httpRequest 00:00:06.663 HttpMethod: GET 00:00:06.663 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.664 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.678 Response Code: HTTP/1.1 200 OK 00:00:06.679 Success: Status code 200 is in the accepted range: 200,404 00:00:06.679 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.156 [Pipeline] sh 00:00:08.437 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.452 [Pipeline] httpRequest 00:00:08.477 [Pipeline] echo 00:00:08.478 Sorcerer 10.211.164.101 is alive 00:00:08.485 [Pipeline] httpRequest 00:00:08.490 HttpMethod: GET 00:00:08.490 URL: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:08.490 Sending request to url: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:08.506 Response Code: HTTP/1.1 200 OK 00:00:08.506 Success: Status code 200 is in the accepted range: 200,404 00:00:08.507 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:58.935 [Pipeline] sh 00:00:59.221 + tar --no-same-owner -xf spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:01:02.520 [Pipeline] sh 00:01:02.804 + git -C spdk log --oneline -n5 00:01:02.804 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:01:02.804 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:02.804 2d30d9f83 accel: introduce tasks in sequence limit 00:01:02.804 2728651ee accel: adjust task per ch define name 00:01:02.804 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:02.824 [Pipeline] withCredentials 00:01:02.835 > git --version # timeout=10 00:01:02.850 > git --version # 'git version 2.39.2' 00:01:02.870 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:02.873 [Pipeline] { 00:01:02.882 [Pipeline] retry 00:01:02.884 [Pipeline] { 00:01:02.905 [Pipeline] sh 00:01:03.210 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:04.607 [Pipeline] } 00:01:04.633 [Pipeline] // retry 00:01:04.641 [Pipeline] } 00:01:04.665 [Pipeline] // withCredentials 00:01:04.678 [Pipeline] httpRequest 00:01:04.697 [Pipeline] echo 00:01:04.700 Sorcerer 10.211.164.101 is alive 00:01:04.712 [Pipeline] httpRequest 00:01:04.718 HttpMethod: GET 00:01:04.718 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:04.719 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:04.725 Response Code: HTTP/1.1 200 OK 00:01:04.726 Success: Status code 200 is in the accepted range: 200,404 00:01:04.726 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.357 [Pipeline] sh 00:01:32.642 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.565 [Pipeline] sh 00:01:34.851 + git -C dpdk log --oneline -n5 00:01:34.851 caf0f5d395 version: 22.11.4 00:01:34.851 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:34.851 dc9c799c7d vhost: fix missing spinlock unlock 00:01:34.851 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:34.851 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:34.861 [Pipeline] } 00:01:34.879 [Pipeline] // stage 00:01:34.888 [Pipeline] stage 00:01:34.890 [Pipeline] { (Prepare) 00:01:34.910 [Pipeline] writeFile 00:01:34.945 [Pipeline] sh 00:01:35.235 + logger -p user.info -t JENKINS-CI 00:01:35.246 [Pipeline] sh 00:01:35.523 + logger -p user.info -t JENKINS-CI 00:01:35.535 [Pipeline] sh 00:01:35.814 + cat autorun-spdk.conf 00:01:35.814 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.814 SPDK_TEST_NVMF=1 00:01:35.814 SPDK_TEST_NVME_CLI=1 00:01:35.814 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.814 SPDK_TEST_NVMF_NICS=e810 00:01:35.814 SPDK_TEST_VFIOUSER=1 00:01:35.814 SPDK_RUN_UBSAN=1 00:01:35.814 NET_TYPE=phy 00:01:35.814 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:35.814 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.820 RUN_NIGHTLY=1 00:01:35.827 [Pipeline] readFile 00:01:35.852 [Pipeline] withEnv 00:01:35.854 [Pipeline] { 00:01:35.868 [Pipeline] sh 00:01:36.150 + set -ex 00:01:36.150 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:36.150 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:36.151 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.151 ++ SPDK_TEST_NVMF=1 00:01:36.151 ++ SPDK_TEST_NVME_CLI=1 00:01:36.151 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.151 ++ SPDK_TEST_NVMF_NICS=e810 00:01:36.151 ++ SPDK_TEST_VFIOUSER=1 00:01:36.151 ++ SPDK_RUN_UBSAN=1 00:01:36.151 ++ NET_TYPE=phy 00:01:36.151 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:36.151 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:36.151 ++ RUN_NIGHTLY=1 00:01:36.151 + 
case $SPDK_TEST_NVMF_NICS in
00:01:36.151 + DRIVERS=ice
00:01:36.151 + [[ tcp == \r\d\m\a ]]
00:01:36.151 + [[ -n ice ]]
00:01:36.151 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:36.151 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:36.151 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:36.151 rmmod: ERROR: Module irdma is not currently loaded
00:01:36.151 rmmod: ERROR: Module i40iw is not currently loaded
00:01:36.151 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:36.151 + true
00:01:36.151 + for D in $DRIVERS
00:01:36.151 + sudo modprobe ice
00:01:36.151 + exit 0
00:01:36.162 [Pipeline] }
00:01:36.184 [Pipeline] // withEnv
00:01:36.192 [Pipeline] }
00:01:36.214 [Pipeline] // stage
00:01:36.224 [Pipeline] catchError
00:01:36.226 [Pipeline] {
00:01:36.240 [Pipeline] timeout
00:01:36.240 Timeout set to expire in 50 min
00:01:36.241 [Pipeline] {
00:01:36.257 [Pipeline] stage
00:01:36.259 [Pipeline] { (Tests)
00:01:36.276 [Pipeline] sh
00:01:36.557 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.558 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.558 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.558 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:36.558 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.558 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:36.558 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:36.558 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:36.558 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:36.558 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:36.558 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:36.558 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.558 + source /etc/os-release
00:01:36.558 ++ NAME='Fedora Linux'
00:01:36.558 ++ VERSION='38 (Cloud Edition)'
00:01:36.558 ++ ID=fedora
00:01:36.558 ++ VERSION_ID=38
00:01:36.558 ++ VERSION_CODENAME=
00:01:36.558 ++ PLATFORM_ID=platform:f38
00:01:36.558 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:36.558 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:36.558 ++ LOGO=fedora-logo-icon
00:01:36.558 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:36.558 ++ HOME_URL=https://fedoraproject.org/
00:01:36.558 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:36.558 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:36.558 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:36.558 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:36.558 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:36.558 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:36.558 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:36.558 ++ SUPPORT_END=2024-05-14
00:01:36.558 ++ VARIANT='Cloud Edition'
00:01:36.558 ++ VARIANT_ID=cloud
00:01:36.558 + uname -a
00:01:36.558 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:36.558 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:37.493 Hugepages
00:01:37.493 node hugesize free / total
00:01:37.493 node0 1048576kB 0 / 0
00:01:37.493 node0 2048kB 0 / 0
00:01:37.493 node1 1048576kB 0 / 0
00:01:37.493 node1 2048kB 0 / 0
00:01:37.493
00:01:37.493 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:37.493 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:37.493 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:37.493 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:37.493 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:37.493 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:37.493 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:37.493 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:37.493 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:37.493 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:37.493 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:37.493 + rm -f /tmp/spdk-ld-path
00:01:37.493 + source autorun-spdk.conf
00:01:37.493 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.493 ++ SPDK_TEST_NVMF=1
00:01:37.493 ++ SPDK_TEST_NVME_CLI=1
00:01:37.493 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.493 ++ SPDK_TEST_NVMF_NICS=e810
00:01:37.493 ++ SPDK_TEST_VFIOUSER=1
00:01:37.493 ++ SPDK_RUN_UBSAN=1
00:01:37.493 ++ NET_TYPE=phy
00:01:37.493 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:37.493 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:37.493 ++ RUN_NIGHTLY=1
00:01:37.493 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:37.493 + [[ -n '' ]]
00:01:37.493 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:37.493 + for M in /var/spdk/build-*-manifest.txt
00:01:37.493 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:37.493 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.493 + for M in /var/spdk/build-*-manifest.txt
00:01:37.493 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:37.493 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.493 ++ uname
00:01:37.493 + [[ Linux == \L\i\n\u\x ]]
00:01:37.493 + sudo dmesg -T
00:01:37.751 + sudo dmesg --clear
00:01:37.751 + dmesg_pid=3813003
00:01:37.751 + [[ Fedora Linux == FreeBSD ]]
00:01:37.751 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.751 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.751 + sudo dmesg -Tw
00:01:37.751 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:37.751 + [[ -x /usr/src/fio-static/fio ]]
00:01:37.751 + export FIO_BIN=/usr/src/fio-static/fio
00:01:37.751 + FIO_BIN=/usr/src/fio-static/fio
00:01:37.751 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:37.751 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:37.751 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:37.751 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.751 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.751 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:37.751 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.751 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.751 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.751 Test configuration: 00:01:37.751 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.751 SPDK_TEST_NVMF=1 00:01:37.751 SPDK_TEST_NVME_CLI=1 00:01:37.751 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.751 SPDK_TEST_NVMF_NICS=e810 00:01:37.751 SPDK_TEST_VFIOUSER=1 00:01:37.751 SPDK_RUN_UBSAN=1 00:01:37.751 NET_TYPE=phy 00:01:37.751 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:37.751 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.751 RUN_NIGHTLY=1 20:07:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:37.751 20:07:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:37.751 20:07:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:37.751 20:07:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:37.752 20:07:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.752 20:07:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.752 20:07:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.752 20:07:16 -- paths/export.sh@5 -- $ export PATH 00:01:37.752 20:07:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.752 20:07:16 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:37.752 20:07:16 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:37.752 20:07:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721066836.XXXXXX 00:01:37.752 20:07:16 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721066836.6IOTzi 00:01:37.752 20:07:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:37.752 20:07:16 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:01:37.752 20:07:16 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.752 20:07:16 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:37.752 20:07:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:37.752 20:07:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:37.752 20:07:16 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:37.752 20:07:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:37.752 20:07:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.752 20:07:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:37.752 20:07:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:37.752 20:07:16 -- pm/common@17 -- $ local monitor 00:01:37.752 20:07:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.752 20:07:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.752 20:07:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.752 20:07:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.752 20:07:16 -- pm/common@21 -- $ date +%s 00:01:37.752 20:07:16 -- pm/common@21 -- $ date +%s 00:01:37.752 20:07:16 -- pm/common@25 -- $ sleep 1 00:01:37.752 20:07:16 -- pm/common@21 -- $ date +%s 00:01:37.752 20:07:16 -- pm/common@21 -- $ date +%s 00:01:37.752 20:07:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066836 00:01:37.752 20:07:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066836 00:01:37.752 20:07:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066836 00:01:37.752 20:07:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066836 00:01:37.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066836_collect-vmstat.pm.log 00:01:37.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066836_collect-cpu-load.pm.log 00:01:37.752 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066836_collect-cpu-temp.pm.log 00:01:37.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066836_collect-bmc-pm.bmc.pm.log 00:01:38.689 20:07:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:38.689 20:07:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.689 20:07:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.689 20:07:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:38.689 20:07:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.689 Mon Jul 15 06:07:17 PM UTC 2024 00:01:38.689 20:07:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.689 v24.09-pre-209-ga95bbf233 00:01:38.689 20:07:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:38.689 20:07:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.689 20:07:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.689 20:07:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:38.689 20:07:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:38.689 20:07:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.689 ************************************ 00:01:38.689 START TEST ubsan 00:01:38.689 ************************************ 00:01:38.689 20:07:17 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:38.689 using ubsan 00:01:38.689 00:01:38.689 real 0m0.000s 00:01:38.689 user 0m0.000s 00:01:38.689 sys 0m0.000s 00:01:38.689 20:07:17 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:38.689 20:07:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:38.689 ************************************ 00:01:38.689 END TEST ubsan 00:01:38.689 ************************************ 00:01:38.689 20:07:17 -- common/autotest_common.sh@1142 -- $ return 0 00:01:38.689 20:07:17 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:38.689 20:07:17 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:38.689 20:07:17 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:38.689 20:07:17 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:38.689 20:07:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:38.689 20:07:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.689 ************************************ 00:01:38.689 START TEST build_native_dpdk 00:01:38.689 ************************************ 00:01:38.689 20:07:17 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:38.689 20:07:17 build_native_dpdk -- 
common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:38.689 20:07:17 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:38.689 caf0f5d395 version: 22.11.4 00:01:38.689 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:38.689 dc9c799c7d vhost: fix missing spinlock unlock 00:01:38.689 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:38.689 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@370 -- $ 
cmp_versions 22.11.4 '<' 21.11.0 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:38.947 20:07:17 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:38.947 patching file config/rte_config.h 00:01:38.947 Hunk #1 succeeded at 60 (offset 1 line). 
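The xtrace above (scripts/common.sh, lt -> cmp_versions) is just a field-by-field comparison of dotted version strings: each version is split on '.', '-' and ':' and the numeric fields are compared left to right. A minimal bash sketch of that idea follows; it is an illustrative re-implementation with a simplified lt helper, not the actual SPDK scripts/common.sh.

```bash
# Illustrative sketch only (not the real scripts/common.sh):
# return 0 (true) if version $1 is strictly less than version $2.
lt() {
    local -a ver1 ver2
    local v len
    IFS='.-:' read -ra ver1 <<< "$1"   # e.g. 22.11.4 -> (22 11 4)
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first greater field: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field: less-than
    done
    return 1   # all fields equal: not less-than
}

# As in the trace above: 22.11.4 is not < 21.11.0, so lt returns 1
# and the build proceeds to patch rte_config.h for a post-21.11 DPDK tree.
lt 22.11.4 21.11.0 && echo "older than 21.11.0" || echo "21.11.0 or newer"
```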
00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:38.947 20:07:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:38.948 20:07:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:38.948 20:07:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:38.948 20:07:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:43.131 The Meson build system 00:01:43.131 Version: 1.3.1 00:01:43.131 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:43.131 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:43.131 Build type: native build 00:01:43.131 Program cat found: YES (/usr/bin/cat) 00:01:43.131 Project name: DPDK 00:01:43.131 Project version: 22.11.4 00:01:43.131 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:43.132 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:43.132 Host machine cpu family: x86_64 00:01:43.132 Host machine cpu: x86_64 00:01:43.132 Message: ## Building in Developer Mode ## 00:01:43.132 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.132 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:43.132 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.132 Program objdump found: YES (/usr/bin/objdump) 00:01:43.132 Program python3 found: YES (/usr/bin/python3) 00:01:43.132 Program cat found: YES (/usr/bin/cat) 00:01:43.132 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:43.132 Checking for size of "void *" : 8 00:01:43.132 Checking for size of "void *" : 8 (cached) 00:01:43.132 Library m found: YES 00:01:43.132 Library numa found: YES 00:01:43.132 Has header "numaif.h" : YES 00:01:43.132 Library fdt found: NO 00:01:43.132 Library execinfo found: NO 00:01:43.132 Has header "execinfo.h" : YES 00:01:43.132 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:43.132 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.132 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.132 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.132 Run-time dependency openssl found: YES 3.0.9 00:01:43.132 Run-time dependency libpcap found: YES 1.10.4 00:01:43.132 Has header "pcap.h" with dependency libpcap: YES 00:01:43.132 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.132 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.132 Compiler for C supports arguments -Wformat: YES 00:01:43.132 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:43.132 Compiler for C supports arguments -Wformat-security: NO 00:01:43.132 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.132 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.132 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.132 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.132 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.132 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.132 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.132 Compiler for C supports arguments -Wundef: YES 00:01:43.132 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.132 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.132 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:43.132 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.132 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:43.132 Compiler for C supports arguments -mavx512f: YES 00:01:43.132 Checking if "AVX512 checking" compiles: YES 00:01:43.132 Fetching value of define "__SSE4_2__" : 1 00:01:43.132 Fetching value of define "__AES__" : 1 00:01:43.132 Fetching value of define "__AVX__" : 1 00:01:43.132 Fetching value of define "__AVX2__" : (undefined) 00:01:43.132 Fetching value of define "__AVX512BW__" : (undefined) 00:01:43.132 Fetching value of define "__AVX512CD__" : (undefined) 00:01:43.132 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:43.132 Fetching value of define "__AVX512F__" : (undefined) 00:01:43.132 Fetching value of define "__AVX512VL__" : (undefined) 00:01:43.132 Fetching value of define "__PCLMUL__" : 1 00:01:43.132 Fetching value of define "__RDRND__" : 1 00:01:43.132 Fetching value of define "__RDSEED__" : (undefined) 00:01:43.132 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:43.132 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.132 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.132 Message: lib/telemetry: Defining dependency "telemetry" 00:01:43.132 Checking for function "getentropy" : YES 00:01:43.132 Message: lib/eal: Defining dependency "eal" 00:01:43.132 Message: lib/ring: Defining dependency "ring" 00:01:43.132 Message: lib/rcu: Defining dependency "rcu" 00:01:43.132 Message: lib/mempool: Defining dependency "mempool" 00:01:43.132 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.132 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.132 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.132 Compiler for C supports arguments -mpclmul: YES 00:01:43.132 Compiler for C supports arguments -maes: YES 00:01:43.132 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.132 Compiler for C supports arguments -mavx512bw: YES 00:01:43.132 Compiler for C supports arguments -mavx512dq: YES 00:01:43.132 Compiler for C supports arguments -mavx512vl: YES 00:01:43.132 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.132 Compiler for C supports arguments -mavx2: YES 00:01:43.132 Compiler for C supports arguments -mavx: YES 00:01:43.132 Message: lib/net: Defining dependency "net" 00:01:43.132 Message: lib/meter: Defining dependency "meter" 00:01:43.132 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.132 Message: lib/pci: Defining dependency "pci" 00:01:43.132 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.132 Message: lib/metrics: Defining dependency "metrics" 00:01:43.132 Message: lib/hash: Defining dependency "hash" 00:01:43.132 Message: lib/timer: Defining dependency "timer" 00:01:43.132 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:43.132 Compiler for C supports arguments -mavx2: YES (cached) 00:01:43.132 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.132 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:43.132 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:43.132 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:43.132 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:43.132 Message: lib/acl: Defining dependency "acl" 00:01:43.132 Message: lib/bbdev: Defining dependency "bbdev" 00:01:43.132 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:43.132 Run-time dependency libelf found: YES 0.190 00:01:43.132 Message: lib/bpf: Defining dependency "bpf" 00:01:43.132 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:43.132 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.132 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.132 Message: lib/distributor: Defining dependency "distributor" 00:01:43.132 Message: lib/efd: Defining dependency "efd" 00:01:43.132 Message: lib/eventdev: Defining dependency "eventdev" 00:01:43.132 Message: lib/gpudev: Defining dependency "gpudev" 00:01:43.132 Message: lib/gro: Defining dependency "gro" 00:01:43.132 Message: lib/gso: Defining dependency "gso" 00:01:43.132 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:43.132 Message: lib/jobstats: Defining dependency "jobstats" 00:01:43.132 Message: lib/latencystats: Defining dependency "latencystats" 00:01:43.132 Message: lib/lpm: Defining dependency "lpm" 00:01:43.132 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.132 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:43.132 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:43.132 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:43.132 Message: lib/member: Defining dependency "member" 00:01:43.132 Message: lib/pcapng: Defining dependency "pcapng" 00:01:43.132 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.132 Message: lib/power: Defining dependency "power" 00:01:43.132 Message: lib/rawdev: Defining dependency "rawdev" 00:01:43.132 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:43.132 Message: lib/dmadev: Defining dependency "dmadev" 00:01:43.132 Message: lib/rib: Defining dependency "rib" 00:01:43.132 Message: lib/reorder: Defining dependency "reorder" 00:01:43.132 Message: lib/sched: Defining dependency "sched" 00:01:43.132 Message: lib/security: Defining dependency "security" 00:01:43.132 Message: lib/stack: Defining dependency "stack" 00:01:43.132 Has header "linux/userfaultfd.h" : YES 00:01:43.132 Message: lib/vhost: Defining dependency "vhost" 00:01:43.132 Message: lib/ipsec: Defining dependency "ipsec" 00:01:43.132 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.132 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:43.132 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:43.132 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:43.132 Message: lib/fib: Defining dependency "fib" 00:01:43.132 Message: lib/port: Defining dependency "port" 00:01:43.132 Message: lib/pdump: Defining dependency "pdump" 00:01:43.132 Message: lib/table: Defining dependency "table" 00:01:43.132 Message: lib/pipeline: Defining dependency "pipeline" 00:01:43.132 Message: lib/graph: Defining dependency "graph" 00:01:43.132 Message: lib/node: Defining dependency "node" 00:01:43.132 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:43.132 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:43.132 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:43.132 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:43.132 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:43.132 Compiler for C supports arguments -Wno-unused-value: YES 00:01:44.074 Compiler for C supports arguments -Wno-format: YES 00:01:44.074 Compiler for C supports arguments -Wno-format-security: YES 00:01:44.074 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:44.074 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:44.074 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:44.074 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:44.074 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:44.074 Compiler for C supports arguments -mavx2: YES (cached) 00:01:44.074 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.074 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.074 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:44.074 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:44.074 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:44.074 Program doxygen found: YES (/usr/bin/doxygen) 00:01:44.074 Configuring doxy-api.conf using configuration 00:01:44.074 Program sphinx-build found: NO 00:01:44.074 Configuring rte_build_config.h using configuration 00:01:44.074 Message: 00:01:44.074 ================= 00:01:44.074 Applications Enabled 00:01:44.074 ================= 00:01:44.074 00:01:44.074 apps: 00:01:44.074 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:44.074 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:44.074 test-security-perf, 00:01:44.074 00:01:44.074 Message: 00:01:44.074 ================= 00:01:44.074 Libraries Enabled 00:01:44.074 ================= 00:01:44.074 00:01:44.074 libs: 00:01:44.074 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:44.074 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:01:44.074 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:44.074 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:44.074 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:44.074 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:44.074 table, pipeline, graph, node, 00:01:44.074 00:01:44.074 Message: 00:01:44.074 =============== 00:01:44.074 Drivers Enabled 00:01:44.074 =============== 00:01:44.074 00:01:44.074 common: 00:01:44.074 00:01:44.074 bus: 00:01:44.074 pci, vdev, 00:01:44.074 mempool: 00:01:44.074 ring, 00:01:44.074 dma: 00:01:44.074 00:01:44.074 net: 00:01:44.074 i40e, 00:01:44.074 raw: 00:01:44.074 00:01:44.074 crypto: 00:01:44.074 00:01:44.074 compress: 00:01:44.074 00:01:44.074 regex: 00:01:44.074 00:01:44.074 vdpa: 00:01:44.074 00:01:44.074 event: 00:01:44.074 00:01:44.074 baseband: 00:01:44.074 00:01:44.074 gpu: 00:01:44.074 00:01:44.074 00:01:44.074 Message: 00:01:44.074 ================= 00:01:44.074 Content Skipped 00:01:44.074 ================= 00:01:44.074 00:01:44.074 apps: 00:01:44.074 00:01:44.074 libs: 00:01:44.074 kni: explicitly disabled via build config (deprecated lib) 00:01:44.074 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:44.074 00:01:44.074 drivers: 00:01:44.074 common/cpt: not in enabled drivers build config 00:01:44.074 common/dpaax: not in enabled drivers build config 00:01:44.074 common/iavf: not in enabled drivers build config 00:01:44.074 common/idpf: not in enabled drivers build config 00:01:44.074 common/mvep: not in enabled drivers build config 00:01:44.074 common/octeontx: not in enabled drivers build config 00:01:44.074 bus/auxiliary: not in enabled drivers build config 00:01:44.074 bus/dpaa: not in enabled drivers build config 00:01:44.074 bus/fslmc: not in enabled drivers build config 00:01:44.074 bus/ifpga: not in enabled drivers build config 00:01:44.075 bus/vmbus: not in enabled drivers build config 00:01:44.075 common/cnxk: not in enabled drivers build config 00:01:44.075 common/mlx5: not in enabled drivers build config 00:01:44.075 common/qat: not in enabled drivers build config 00:01:44.075 common/sfc_efx: not in enabled drivers build config 00:01:44.075 mempool/bucket: not in enabled drivers build config 00:01:44.075 mempool/cnxk: not in enabled drivers build config 00:01:44.075 mempool/dpaa: not in enabled drivers build config 00:01:44.075 mempool/dpaa2: not in enabled drivers build config 00:01:44.075 mempool/octeontx: not in enabled drivers build config 00:01:44.075 mempool/stack: not in enabled drivers build config 00:01:44.075 dma/cnxk: not in enabled drivers build config 00:01:44.075 dma/dpaa: not in enabled drivers build config 00:01:44.075 dma/dpaa2: not in enabled drivers build config 00:01:44.075 dma/hisilicon: not in enabled drivers build config 00:01:44.075 dma/idxd: not in enabled drivers build config 00:01:44.075 dma/ioat: not in enabled drivers build config 00:01:44.075 dma/skeleton: not in enabled drivers build config 00:01:44.075 net/af_packet: not in enabled drivers build config 00:01:44.075 net/af_xdp: not in enabled drivers build config 00:01:44.075 net/ark: not in enabled drivers build config 00:01:44.075 net/atlantic: not in enabled drivers build config 00:01:44.075 net/avp: not in enabled drivers build config 00:01:44.075 net/axgbe: not in enabled drivers build config 00:01:44.075 net/bnx2x: not in enabled drivers build config 00:01:44.075 net/bnxt: not in 
enabled drivers build config 00:01:44.075 net/bonding: not in enabled drivers build config 00:01:44.075 net/cnxk: not in enabled drivers build config 00:01:44.075 net/cxgbe: not in enabled drivers build config 00:01:44.075 net/dpaa: not in enabled drivers build config 00:01:44.075 net/dpaa2: not in enabled drivers build config 00:01:44.075 net/e1000: not in enabled drivers build config 00:01:44.075 net/ena: not in enabled drivers build config 00:01:44.075 net/enetc: not in enabled drivers build config 00:01:44.075 net/enetfec: not in enabled drivers build config 00:01:44.075 net/enic: not in enabled drivers build config 00:01:44.075 net/failsafe: not in enabled drivers build config 00:01:44.075 net/fm10k: not in enabled drivers build config 00:01:44.075 net/gve: not in enabled drivers build config 00:01:44.075 net/hinic: not in enabled drivers build config 00:01:44.075 net/hns3: not in enabled drivers build config 00:01:44.075 net/iavf: not in enabled drivers build config 00:01:44.075 net/ice: not in enabled drivers build config 00:01:44.075 net/idpf: not in enabled drivers build config 00:01:44.075 net/igc: not in enabled drivers build config 00:01:44.075 net/ionic: not in enabled drivers build config 00:01:44.075 net/ipn3ke: not in enabled drivers build config 00:01:44.075 net/ixgbe: not in enabled drivers build config 00:01:44.075 net/kni: not in enabled drivers build config 00:01:44.075 net/liquidio: not in enabled drivers build config 00:01:44.075 net/mana: not in enabled drivers build config 00:01:44.075 net/memif: not in enabled drivers build config 00:01:44.075 net/mlx4: not in enabled drivers build config 00:01:44.075 net/mlx5: not in enabled drivers build config 00:01:44.075 net/mvneta: not in enabled drivers build config 00:01:44.075 net/mvpp2: not in enabled drivers build config 00:01:44.075 net/netvsc: not in enabled drivers build config 00:01:44.075 net/nfb: not in enabled drivers build config 00:01:44.075 net/nfp: not in enabled drivers build config 00:01:44.075 net/ngbe: not in enabled drivers build config 00:01:44.075 net/null: not in enabled drivers build config 00:01:44.075 net/octeontx: not in enabled drivers build config 00:01:44.075 net/octeon_ep: not in enabled drivers build config 00:01:44.075 net/pcap: not in enabled drivers build config 00:01:44.075 net/pfe: not in enabled drivers build config 00:01:44.075 net/qede: not in enabled drivers build config 00:01:44.075 net/ring: not in enabled drivers build config 00:01:44.075 net/sfc: not in enabled drivers build config 00:01:44.075 net/softnic: not in enabled drivers build config 00:01:44.075 net/tap: not in enabled drivers build config 00:01:44.075 net/thunderx: not in enabled drivers build config 00:01:44.075 net/txgbe: not in enabled drivers build config 00:01:44.075 net/vdev_netvsc: not in enabled drivers build config 00:01:44.075 net/vhost: not in enabled drivers build config 00:01:44.075 net/virtio: not in enabled drivers build config 00:01:44.075 net/vmxnet3: not in enabled drivers build config 00:01:44.075 raw/cnxk_bphy: not in enabled drivers build config 00:01:44.075 raw/cnxk_gpio: not in enabled drivers build config 00:01:44.075 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:44.075 raw/ifpga: not in enabled drivers build config 00:01:44.075 raw/ntb: not in enabled drivers build config 00:01:44.075 raw/skeleton: not in enabled drivers build config 00:01:44.075 crypto/armv8: not in enabled drivers build config 00:01:44.075 crypto/bcmfs: not in enabled drivers build config 00:01:44.075 
crypto/caam_jr: not in enabled drivers build config 00:01:44.075 crypto/ccp: not in enabled drivers build config 00:01:44.075 crypto/cnxk: not in enabled drivers build config 00:01:44.075 crypto/dpaa_sec: not in enabled drivers build config 00:01:44.075 crypto/dpaa2_sec: not in enabled drivers build config 00:01:44.075 crypto/ipsec_mb: not in enabled drivers build config 00:01:44.075 crypto/mlx5: not in enabled drivers build config 00:01:44.075 crypto/mvsam: not in enabled drivers build config 00:01:44.075 crypto/nitrox: not in enabled drivers build config 00:01:44.075 crypto/null: not in enabled drivers build config 00:01:44.075 crypto/octeontx: not in enabled drivers build config 00:01:44.075 crypto/openssl: not in enabled drivers build config 00:01:44.075 crypto/scheduler: not in enabled drivers build config 00:01:44.075 crypto/uadk: not in enabled drivers build config 00:01:44.075 crypto/virtio: not in enabled drivers build config 00:01:44.075 compress/isal: not in enabled drivers build config 00:01:44.075 compress/mlx5: not in enabled drivers build config 00:01:44.075 compress/octeontx: not in enabled drivers build config 00:01:44.075 compress/zlib: not in enabled drivers build config 00:01:44.075 regex/mlx5: not in enabled drivers build config 00:01:44.075 regex/cn9k: not in enabled drivers build config 00:01:44.075 vdpa/ifc: not in enabled drivers build config 00:01:44.075 vdpa/mlx5: not in enabled drivers build config 00:01:44.075 vdpa/sfc: not in enabled drivers build config 00:01:44.075 event/cnxk: not in enabled drivers build config 00:01:44.075 event/dlb2: not in enabled drivers build config 00:01:44.075 event/dpaa: not in enabled drivers build config 00:01:44.075 event/dpaa2: not in enabled drivers build config 00:01:44.075 event/dsw: not in enabled drivers build config 00:01:44.075 event/opdl: not in enabled drivers build config 00:01:44.075 event/skeleton: not in enabled drivers build config 00:01:44.075 event/sw: not in enabled drivers build config 00:01:44.075 event/octeontx: not in enabled drivers build config 00:01:44.075 baseband/acc: not in enabled drivers build config 00:01:44.075 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:44.075 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:44.075 baseband/la12xx: not in enabled drivers build config 00:01:44.075 baseband/null: not in enabled drivers build config 00:01:44.075 baseband/turbo_sw: not in enabled drivers build config 00:01:44.075 gpu/cuda: not in enabled drivers build config 00:01:44.075 00:01:44.075 00:01:44.075 Build targets in project: 316 00:01:44.075 00:01:44.075 DPDK 22.11.4 00:01:44.075 00:01:44.075 User defined options 00:01:44.075 libdir : lib 00:01:44.075 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.075 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:44.075 c_link_args : 00:01:44.075 enable_docs : false 00:01:44.075 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.075 enable_kmods : false 00:01:44.075 machine : native 00:01:44.075 tests : false 00:01:44.075 00:01:44.075 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.075 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:44.075 20:07:22 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:44.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:44.075 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:44.075 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:44.075 [3/745] Generating lib/rte_telemetry_def with a custom command 00:01:44.075 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:44.075 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:44.075 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:44.075 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:44.075 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:44.075 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.075 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.075 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:44.075 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:44.342 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:44.342 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:44.342 [15/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:44.342 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:44.342 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:44.342 [18/745] Linking static target lib/librte_kvargs.a 00:01:44.342 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:44.342 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:44.342 [21/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.342 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:44.342 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:44.342 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:44.342 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:44.342 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:44.342 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:44.342 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:44.342 [29/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:44.342 [30/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:44.342 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:44.342 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:44.342 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:44.342 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:44.342 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:44.342 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:44.342 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:44.342 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:44.342 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:44.342 [40/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:44.342 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:44.342 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:44.342 [43/745] Generating lib/rte_eal_def with a custom command 00:01:44.342 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:44.342 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:44.343 [46/745] Generating lib/rte_ring_def with a custom command 00:01:44.343 [47/745] Generating lib/rte_ring_mingw with a custom command 00:01:44.343 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:44.343 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:44.343 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:44.343 [51/745] Generating lib/rte_rcu_def with a custom command 00:01:44.343 [52/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:44.343 [53/745] Generating lib/rte_rcu_mingw with a custom command 00:01:44.343 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:44.343 [55/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:44.343 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:44.343 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:44.343 [58/745] Generating lib/rte_mempool_def with a custom command 00:01:44.343 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:44.343 [60/745] Generating lib/rte_mempool_mingw with a custom command 00:01:44.608 [61/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:44.608 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:44.608 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:44.608 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:44.608 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:44.608 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:44.608 [67/745] Generating lib/rte_net_def with a custom command 00:01:44.608 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:44.608 [69/745] Generating lib/rte_net_mingw with a custom command 00:01:44.608 [70/745] Generating lib/rte_meter_def with a custom command 00:01:44.608 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:44.608 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:44.608 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.608 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:44.608 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:44.608 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:44.608 [77/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:44.608 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:44.608 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.608 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.608 [81/745] Linking static target lib/librte_ring.a 00:01:44.608 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:44.608 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:44.608 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:44.871 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:44.871 [86/745] Generating lib/rte_pci_def with a custom command 00:01:44.871 [87/745] Linking static target lib/librte_meter.a 00:01:44.871 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:44.871 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:44.871 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:44.871 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:44.871 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:44.871 [93/745] Linking static target lib/librte_pci.a 00:01:44.871 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:44.871 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.871 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:44.871 [97/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:45.136 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:45.136 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.136 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:45.136 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.136 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:45.136 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:45.136 [104/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.136 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:45.136 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:45.136 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:45.136 [108/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.136 [109/745] Generating lib/rte_cmdline_def with a custom command 00:01:45.136 [110/745] Linking static target lib/librte_telemetry.a 00:01:45.136 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:45.136 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:45.136 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:45.136 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:45.136 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:45.399 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:45.399 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:45.399 [118/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:45.399 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:45.399 [120/745] Generating lib/rte_hash_def with a custom command 00:01:45.399 [121/745] Generating lib/rte_timer_def with a custom command 00:01:45.399 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:45.399 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.399 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:45.715 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:45.715 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:45.715 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:45.715 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:45.715 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.715 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:45.715 [131/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:45.715 [132/745] Generating lib/rte_acl_def with a custom command 00:01:45.715 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:45.715 [134/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:45.715 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:45.715 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:45.715 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:45.715 [138/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.715 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:01:45.715 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:45.715 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:45.715 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:45.715 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.715 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:45.715 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.715 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:45.976 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.976 [148/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.976 [149/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.976 [150/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.976 [151/745] Generating lib/rte_bpf_def with a custom command 00:01:45.976 [152/745] Generating lib/rte_bpf_mingw with a custom command 00:01:45.976 [153/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.976 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.976 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:01:45.976 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.976 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:45.976 [158/745] Generating lib/rte_compressdev_def with a custom command 00:01:45.976 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:45.976 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.976 [161/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:45.976 [162/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.976 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:01:45.976 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.976 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:45.976 [166/745] Generating lib/rte_cryptodev_def with a custom command 00:01:46.242 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:46.242 [168/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.242 [169/745] Generating lib/rte_distributor_mingw with a custom command 00:01:46.242 [170/745] Linking static target lib/librte_rcu.a 00:01:46.242 [171/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.242 [172/745] Generating lib/rte_distributor_def with a custom command 00:01:46.242 [173/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.242 [174/745] Linking static target lib/librte_timer.a 00:01:46.242 [175/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:46.242 [176/745] Generating lib/rte_efd_def with a custom command 00:01:46.242 [177/745] Linking static target lib/librte_cmdline.a 00:01:46.242 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:46.242 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:46.242 [180/745] Linking static target lib/librte_net.a 00:01:46.242 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.500 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:46.500 [183/745] Linking static target lib/librte_metrics.a 00:01:46.500 [184/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:46.500 [185/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:46.500 [186/745] Linking static target lib/librte_mempool.a 00:01:46.500 [187/745] Linking static target lib/librte_cfgfile.a 00:01:46.500 [188/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.759 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:46.759 [190/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.759 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.759 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.759 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.759 [194/745] Generating lib/rte_eventdev_def with a custom command 00:01:46.759 [195/745] Linking static target lib/librte_eal.a 00:01:46.759 [196/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:46.759 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:46.759 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:46.759 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:47.021 [200/745] Generating lib/rte_gpudev_def with a custom command 00:01:47.021 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:47.021 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:47.021 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:47.021 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:47.021 [205/745] Linking static target lib/librte_bitratestats.a 00:01:47.021 [206/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.021 [207/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:47.021 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.021 [209/745] Generating lib/rte_gro_def with a custom command 00:01:47.021 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:47.021 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:47.021 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:47.281 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:47.281 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:47.281 [215/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.281 [216/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.281 [217/745] Generating lib/rte_gso_mingw with a custom command 00:01:47.281 [218/745] Generating lib/rte_gso_def with a custom command 00:01:47.281 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:47.281 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:47.548 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:47.548 [222/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:47.548 [223/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:47.548 [224/745] Generating lib/rte_ip_frag_def with a custom command 00:01:47.548 [225/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:47.548 [226/745] Linking static target lib/librte_bbdev.a 00:01:47.548 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:47.548 [228/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.805 [229/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:47.805 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:47.805 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:47.805 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:47.805 [233/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.805 [234/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:47.805 [235/745] Generating lib/rte_latencystats_def with a custom command 00:01:47.805 [236/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:47.805 [237/745] Linking static target lib/librte_compressdev.a 00:01:47.805 [238/745] Generating lib/rte_lpm_def with a custom command 00:01:47.805 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:47.805 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:47.805 [241/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:47.805 [242/745] Linking static target lib/librte_jobstats.a 00:01:47.805 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:48.067 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.067 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:48.067 [246/745] Linking static target lib/librte_distributor.a 00:01:48.067 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:48.067 [248/745] 
Generating lib/rte_member_def with a custom command 00:01:48.326 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:48.326 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.326 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:48.326 [252/745] Generating lib/rte_pcapng_def with a custom command 00:01:48.326 [253/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:48.326 [254/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:48.326 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:48.326 [256/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:48.326 [257/745] Linking static target lib/librte_bpf.a 00:01:48.592 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:48.592 [259/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.592 [260/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:48.592 [261/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.592 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.592 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:48.592 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:48.592 [265/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.592 [266/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:48.592 [267/745] Generating lib/rte_power_def with a custom command 00:01:48.592 [268/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:48.592 [269/745] Generating lib/rte_power_mingw with a custom command 00:01:48.592 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:48.592 [271/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:48.592 [272/745] Linking static target lib/librte_gro.a 00:01:48.592 [273/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.592 [274/745] Linking static target lib/librte_gpudev.a 00:01:48.592 [275/745] Generating lib/rte_rawdev_def with a custom command 00:01:48.592 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:48.592 [277/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:48.592 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:48.592 [279/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.850 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:48.850 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:48.850 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:48.850 [283/745] Generating lib/rte_rib_def with a custom command 00:01:48.850 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:48.850 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:48.850 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:48.850 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:48.850 [288/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.850 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:49.111 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.111 
[291/745] Generating lib/rte_sched_def with a custom command 00:01:49.111 [292/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.111 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:49.111 [294/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:49.111 [295/745] Generating lib/rte_sched_mingw with a custom command 00:01:49.111 [296/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:49.111 [297/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:49.111 [298/745] Generating lib/rte_security_mingw with a custom command 00:01:49.111 [299/745] Generating lib/rte_security_def with a custom command 00:01:49.111 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:49.111 [301/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:49.111 [302/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:49.111 [303/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:49.111 [304/745] Generating lib/rte_stack_mingw with a custom command 00:01:49.111 [305/745] Generating lib/rte_stack_def with a custom command 00:01:49.374 [306/745] Linking static target lib/librte_latencystats.a 00:01:49.374 [307/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:49.374 [308/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:49.374 [309/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:49.374 [310/745] Linking static target lib/librte_rawdev.a 00:01:49.374 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:49.374 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:49.374 [313/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:49.374 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:49.374 [315/745] Linking static target lib/librte_stack.a 00:01:49.374 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:49.374 [317/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:49.374 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:49.374 [319/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:49.374 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:49.374 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.374 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:49.374 [323/745] Linking static target lib/librte_dmadev.a 00:01:49.374 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:49.637 [325/745] Linking static target lib/librte_ip_frag.a 00:01:49.637 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.637 [327/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.637 [328/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:49.899 [329/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:49.899 [330/745] Generating lib/rte_ipsec_def with a custom command 00:01:49.899 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:49.899 [332/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:49.899 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:49.899 [334/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.161 [335/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.161 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.161 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:50.161 [338/745] Generating lib/rte_fib_def with a custom command 00:01:50.161 [339/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.161 [340/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:50.161 [341/745] Generating lib/rte_fib_mingw with a custom command 00:01:50.161 [342/745] Linking static target lib/librte_gso.a 00:01:50.161 [343/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.161 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:50.161 [345/745] Linking static target lib/librte_regexdev.a 00:01:50.420 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.420 [347/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:50.420 [348/745] Linking static target lib/librte_pcapng.a 00:01:50.420 [349/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.420 [350/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:50.420 [351/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:50.420 [352/745] Linking static target lib/librte_efd.a 00:01:50.681 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:50.681 [354/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.682 [355/745] Linking static target lib/librte_lpm.a 00:01:50.682 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:50.682 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:50.682 [358/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.682 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.682 [360/745] Linking static target lib/librte_reorder.a 00:01:50.682 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:50.943 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.943 [363/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.943 [364/745] Generating lib/rte_port_def with a custom command 00:01:50.943 [365/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:50.943 [366/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:50.943 [367/745] Generating lib/rte_port_mingw with a custom command 00:01:50.943 [368/745] Generating lib/rte_pdump_def with a custom command 00:01:50.943 [369/745] Generating lib/rte_pdump_mingw with a custom command 00:01:51.203 [370/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:51.203 [371/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:51.203 [372/745] Linking static target lib/acl/libavx2_tmp.a 00:01:51.203 [373/745] Linking static target lib/librte_security.a 00:01:51.203 [374/745] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:51.203 [375/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:51.203 [376/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:51.203 [377/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:51.203 [378/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.203 [379/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:51.203 [380/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:51.203 [381/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:51.203 [382/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:51.203 [383/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:51.203 [384/745] Linking static target lib/librte_power.a 00:01:51.203 [385/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.203 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:51.203 [387/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.465 [388/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.465 [389/745] Linking static target lib/librte_hash.a 00:01:51.465 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:51.465 [391/745] Linking static target lib/librte_rib.a 00:01:51.465 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:51.726 [393/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:51.726 [394/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:51.726 [395/745] Linking static target lib/acl/libavx512_tmp.a 00:01:51.726 [396/745] Linking static target lib/librte_acl.a 00:01:51.726 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:51.726 [398/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.726 [399/745] Linking static target lib/librte_ethdev.a 00:01:51.726 [400/745] Generating lib/rte_table_def with a custom command 00:01:51.726 [401/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.726 [402/745] Generating lib/rte_table_mingw with a custom command 00:01:51.993 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:52.252 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.252 [405/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.252 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:52.252 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:52.252 [408/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:52.252 [409/745] Linking static target lib/librte_mbuf.a 00:01:52.252 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:52.252 [411/745] Generating lib/rte_pipeline_def with a custom command 00:01:52.516 [412/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.516 [413/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:52.516 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:52.516 [415/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:52.516 [416/745] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.516 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:52.516 [418/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:52.516 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:52.516 [420/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:52.516 [421/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:52.516 [422/745] Generating lib/rte_graph_def with a custom command 00:01:52.516 [423/745] Linking static target lib/librte_fib.a 00:01:52.516 [424/745] Generating lib/rte_graph_mingw with a custom command 00:01:52.516 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:52.516 [426/745] Linking static target lib/librte_member.a 00:01:52.783 [427/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:52.783 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:52.783 [429/745] Linking static target lib/librte_eventdev.a 00:01:52.783 [430/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:52.783 [431/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.783 [432/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:52.783 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:52.783 [434/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:52.783 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:52.783 [436/745] Generating lib/rte_node_def with a custom command 00:01:52.783 [437/745] Generating lib/rte_node_mingw with a custom command 00:01:52.783 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:53.047 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:53.047 [440/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.047 [441/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.047 [442/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.047 [443/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:53.047 [444/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:53.047 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.047 [446/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.047 [447/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:53.047 [448/745] Linking static target lib/librte_sched.a 00:01:53.315 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:53.315 [450/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:53.315 [451/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:53.315 [452/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.315 [453/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.315 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:53.315 [455/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:53.315 [456/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:53.315 
[457/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:53.315 [458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.315 [459/745] Linking static target lib/librte_cryptodev.a 00:01:53.315 [460/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:53.315 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:53.574 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.574 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:53.574 [464/745] Linking static target lib/librte_pdump.a 00:01:53.574 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.574 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:53.574 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:53.574 [468/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.574 [469/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.574 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:53.836 [471/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:53.836 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:53.836 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:53.836 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.836 [475/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.836 [476/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:53.836 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:53.836 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:54.103 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:54.103 [480/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:54.103 [481/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.103 [482/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:54.103 [483/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:54.103 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.103 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.103 [486/745] Linking static target drivers/librte_bus_vdev.a 00:01:54.103 [487/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:54.103 [488/745] Linking static target lib/librte_table.a 00:01:54.103 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:54.103 [490/745] Linking static target lib/librte_ipsec.a 00:01:54.367 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:54.367 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.367 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.367 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.629 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.629 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:54.629 [497/745] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:54.629 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:54.629 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:54.891 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:54.891 [501/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:54.891 [502/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:54.891 [503/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.891 [504/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.891 [505/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:54.891 [506/745] Linking static target lib/librte_graph.a 00:01:54.891 [507/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.891 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.891 [509/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:54.891 [510/745] Linking static target drivers/librte_bus_pci.a 00:01:54.891 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:55.151 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:55.151 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:55.418 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.418 [515/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.418 [516/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:55.677 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:55.677 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:55.677 [519/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.677 [520/745] Linking static target lib/librte_port.a 00:01:55.677 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:55.677 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:55.942 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.942 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.942 [525/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:55.942 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:56.208 [527/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.208 [528/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:56.208 [529/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.208 [530/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.208 [531/745] Linking static target drivers/librte_mempool_ring.a 00:01:56.208 [532/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.208 [533/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:56.470 [534/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 
00:01:56.470 [535/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:56.470 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:56.470 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:56.470 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.734 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:56.734 [540/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:56.734 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.997 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:56.997 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:57.263 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:57.263 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:57.263 [546/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:57.263 [547/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:57.263 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:57.263 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:57.263 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:57.528 [551/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:57.787 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:57.787 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:57.787 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:57.787 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:58.051 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:58.051 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:58.051 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:58.051 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:58.632 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:58.632 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:58.632 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:58.632 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:58.632 [564/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:58.632 [565/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:58.632 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:58.632 [567/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:58.632 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:58.632 [569/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:58.894 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:58.894 [571/745] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:59.172 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:59.172 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:59.172 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:59.172 [575/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.172 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:59.173 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:59.173 [578/745] Linking target lib/librte_eal.so.23.0 00:01:59.455 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:59.455 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:59.455 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:59.455 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:59.455 [583/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:59.455 [584/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:59.455 [585/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:59.455 [586/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.455 [587/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:59.455 [588/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:59.730 [589/745] Linking target lib/librte_ring.so.23.0 00:01:59.730 [590/745] Linking target lib/librte_meter.so.23.0 00:01:59.730 [591/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:59.730 [592/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:59.997 [593/745] Linking target lib/librte_rcu.so.23.0 00:01:59.997 [594/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:59.997 [595/745] Linking target lib/librte_mempool.so.23.0 00:01:59.997 [596/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:59.997 [597/745] Linking target lib/librte_pci.so.23.0 00:01:59.997 [598/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:00.268 [599/745] Linking target lib/librte_timer.so.23.0 00:02:00.268 [600/745] Linking target lib/librte_acl.so.23.0 00:02:00.268 [601/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:00.268 [602/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:00.268 [603/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:00.268 [604/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:00.268 [605/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:00.268 [606/745] Linking target lib/librte_mbuf.so.23.0 00:02:00.268 [607/745] Linking target lib/librte_cfgfile.so.23.0 00:02:00.268 [608/745] Linking target lib/librte_jobstats.so.23.0 00:02:00.268 [609/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:00.530 [610/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:00.530 
[611/745] Linking target lib/librte_rawdev.so.23.0 00:02:00.530 [612/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:00.530 [613/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:00.530 [614/745] Linking target lib/librte_dmadev.so.23.0 00:02:00.530 [615/745] Linking target lib/librte_stack.so.23.0 00:02:00.530 [616/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:00.530 [617/745] Linking target lib/librte_rib.so.23.0 00:02:00.530 [618/745] Linking target lib/librte_graph.so.23.0 00:02:00.530 [619/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:00.530 [620/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:00.530 [621/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:00.530 [622/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:00.530 [623/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:00.530 [624/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:00.530 [625/745] Linking target lib/librte_net.so.23.0 00:02:00.789 [626/745] Linking target lib/librte_distributor.so.23.0 00:02:00.789 [627/745] Linking target lib/librte_compressdev.so.23.0 00:02:00.789 [628/745] Linking target lib/librte_bbdev.so.23.0 00:02:00.789 [629/745] Linking target lib/librte_cryptodev.so.23.0 00:02:00.789 [630/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:00.789 [631/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:00.789 [632/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:00.789 [633/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:00.789 [634/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:00.789 [635/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:00.789 [636/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:00.789 [637/745] Linking target lib/librte_gpudev.so.23.0 00:02:00.789 [638/745] Linking target lib/librte_reorder.so.23.0 00:02:00.789 [639/745] Linking target lib/librte_regexdev.so.23.0 00:02:00.789 [640/745] Linking target lib/librte_sched.so.23.0 00:02:00.789 [641/745] Linking target lib/librte_fib.so.23.0 00:02:00.789 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:00.789 [643/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:00.789 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:00.789 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:00.789 [646/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:00.789 [647/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:00.789 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:01.049 [649/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:01.049 [650/745] Linking target lib/librte_security.so.23.0 00:02:01.049 [651/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:01.049 [652/745] Linking target lib/librte_cmdline.so.23.0 00:02:01.049 [653/745] Linking target lib/librte_hash.so.23.0 00:02:01.049 [654/745] Linking target lib/librte_ethdev.so.23.0 00:02:01.049 [655/745] 
Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:01.049 [656/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:01.049 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:01.049 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:01.049 [659/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:01.049 [660/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:01.049 [661/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:01.308 [662/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:01.308 [663/745] Linking target lib/librte_bpf.so.23.0 00:02:01.308 [664/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:01.308 [665/745] Linking target lib/librte_metrics.so.23.0 00:02:01.308 [666/745] Linking target lib/librte_efd.so.23.0 00:02:01.308 [667/745] Linking target lib/librte_lpm.so.23.0 00:02:01.308 [668/745] Linking target lib/librte_gso.so.23.0 00:02:01.308 [669/745] Linking target lib/librte_gro.so.23.0 00:02:01.308 [670/745] Linking target lib/librte_pcapng.so.23.0 00:02:01.308 [671/745] Linking target lib/librte_member.so.23.0 00:02:01.308 [672/745] Linking target lib/librte_eventdev.so.23.0 00:02:01.308 [673/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:01.308 [674/745] Linking target lib/librte_ip_frag.so.23.0 00:02:01.308 [675/745] Linking target lib/librte_ipsec.so.23.0 00:02:01.308 [676/745] Linking target lib/librte_power.so.23.0 00:02:01.308 [677/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:01.308 [678/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:01.308 [679/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:01.308 [680/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:01.308 [681/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:01.308 [682/745] Linking target lib/librte_bitratestats.so.23.0 00:02:01.308 [683/745] Linking target lib/librte_latencystats.so.23.0 00:02:01.308 [684/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:01.576 [685/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:01.576 [686/745] Linking target lib/librte_pdump.so.23.0 00:02:01.576 [687/745] Linking target lib/librte_port.so.23.0 00:02:01.576 [688/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:01.576 [689/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:01.576 [690/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:01.576 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:01.576 [692/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:01.834 [693/745] Linking target lib/librte_table.so.23.0 00:02:01.834 [694/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:02.092 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:02.349 [696/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:02.349 [697/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:02.349 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:02.607 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:02.607 [700/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:02.607 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:02.864 [702/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:02.864 [703/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:03.121 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:03.121 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:03.121 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:03.121 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:03.378 [708/745] Linking static target drivers/librte_net_i40e.a 00:02:03.378 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:03.636 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:03.895 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.895 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:04.830 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:04.830 [714/745] Linking static target lib/librte_node.a 00:02:04.830 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.086 [716/745] Linking target lib/librte_node.so.23.0 00:02:05.086 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:05.650 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:06.217 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:14.327 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.452 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.452 [722/745] Linking static target lib/librte_vhost.a 00:02:46.452 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.452 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:56.429 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:56.429 [726/745] Linking static target lib/librte_pipeline.a 00:02:56.429 [727/745] Linking target app/dpdk-test-acl 00:02:56.429 [728/745] Linking target app/dpdk-test-sad 00:02:56.429 [729/745] Linking target app/dpdk-test-cmdline 00:02:56.429 [730/745] Linking target app/dpdk-test-regex 00:02:56.429 [731/745] Linking target app/dpdk-test-flow-perf 00:02:56.429 [732/745] Linking target app/dpdk-test-bbdev 00:02:56.429 [733/745] Linking target app/dpdk-test-crypto-perf 00:02:56.429 [734/745] Linking target app/dpdk-test-compress-perf 00:02:56.429 [735/745] Linking target app/dpdk-pdump 00:02:56.429 [736/745] Linking target app/dpdk-test-fib 00:02:56.429 [737/745] Linking target app/dpdk-dumpcap 00:02:56.429 [738/745] Linking target app/dpdk-proc-info 00:02:56.429 [739/745] Linking target app/dpdk-test-gpudev 00:02:56.429 [740/745] Linking target app/dpdk-test-security-perf 00:02:56.429 [741/745] Linking target app/dpdk-test-pipeline 00:02:56.429 [742/745] Linking 
target app/dpdk-test-eventdev 00:02:56.429 [743/745] Linking target app/dpdk-testpmd 00:02:57.363 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.363 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:57.363 20:08:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:57.363 20:08:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:57.363 20:08:35 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:57.620 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:57.621 [0/1] Installing files. 00:02:57.883 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.885 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.886 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.887 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 
00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:57.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:57.889 Installing lib/librte_kvargs.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.889 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:57.890 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:57.890 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.890 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_table.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:58.461 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:58.461 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:58.461 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.461 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:58.461 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.461 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.462 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.465 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:58.465 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:58.465 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:58.465 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:58.465 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:58.465 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:58.465 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:58.465 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:58.465 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:58.465 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:58.465 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:58.465 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:58.465 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:58.465 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:58.465 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:58.465 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:58.465 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:58.465 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:58.465 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:58.465 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:58.465 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:58.465 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:58.465 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:58.465 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:58.465 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:58.465 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:58.465 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:58.465 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:58.465 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:58.465 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:58.465 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:58.465 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:58.465 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:58.465 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:58.465 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:58.465 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:58.465 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:58.465 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:02:58.465 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:58.465 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:58.465 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:58.465 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:58.465 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:58.465 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:58.465 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:58.465 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:58.465 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:58.465 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:58.465 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:58.465 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:58.465 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:58.465 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:58.465 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:58.465 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:58.465 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:58.465 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:58.465 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:58.465 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:58.465 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:58.465 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:58.465 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:58.465 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:58.465 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:58.465 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:58.465 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:58.465 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:58.465 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:58.465 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:58.465 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:58.465 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:58.465 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:58.465 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:58.465 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:58.465 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:58.465 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:58.465 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:58.465 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:58.465 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:58.465 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:58.465 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:58.466 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:58.466 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:58.466 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:58.466 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:58.466 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:58.466 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:58.466 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:58.466 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:58.466 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:58.466 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:58.466 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:58.466 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:58.466 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:58.466 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:58.466 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:58.466 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:58.466 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:58.466 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:58.466 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:58.466 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:58.466 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:58.466 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:58.466 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:58.466 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:58.466 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:58.466 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:58.466 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:58.466 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:58.466 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:58.466 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:58.466 './librte_bus_pci.so.23.0' -> 
'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:58.466 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:58.466 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:58.466 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:58.466 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:58.466 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:58.466 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:58.466 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:58.466 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:58.466 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:58.466 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:58.466 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:58.466 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:58.466 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:58.466 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:58.466 20:08:36 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:58.466 20:08:36 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:58.466 00:02:58.466 real 1m19.663s 00:02:58.466 user 14m19.262s 00:02:58.466 sys 1m47.260s 00:02:58.466 20:08:36 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:58.466 20:08:36 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:58.466 ************************************ 00:02:58.466 END TEST build_native_dpdk 00:02:58.466 ************************************ 00:02:58.466 20:08:36 -- common/autotest_common.sh@1142 -- $ return 0 00:02:58.466 20:08:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:58.466 20:08:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:58.466 20:08:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:58.466 20:08:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:58.466 20:08:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:58.466 20:08:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:58.466 20:08:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:58.466 20:08:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:58.466 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:02:58.725 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.725 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.725 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:58.983 Using 'verbs' RDMA provider 00:03:09.518 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:19.566 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:19.566 Creating mk/config.mk...done. 00:03:19.566 Creating mk/cc.flags.mk...done. 00:03:19.566 Type 'make' to build. 00:03:19.566 20:08:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:19.566 20:08:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:19.566 20:08:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:19.566 20:08:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:19.566 ************************************ 00:03:19.566 START TEST make 00:03:19.566 ************************************ 00:03:19.566 20:08:56 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:19.566 make[1]: Nothing to be done for 'all'. 00:03:20.137 The Meson build system 00:03:20.137 Version: 1.3.1 00:03:20.137 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:20.137 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:20.137 Build type: native build 00:03:20.137 Project name: libvfio-user 00:03:20.137 Project version: 0.0.1 00:03:20.137 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:20.137 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:20.137 Host machine cpu family: x86_64 00:03:20.137 Host machine cpu: x86_64 00:03:20.137 Run-time dependency threads found: YES 00:03:20.137 Library dl found: YES 00:03:20.137 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:20.137 Run-time dependency json-c found: YES 0.17 00:03:20.137 Run-time dependency cmocka found: YES 1.1.7 00:03:20.137 Program pytest-3 found: NO 00:03:20.137 Program flake8 found: NO 00:03:20.137 Program misspell-fixer found: NO 00:03:20.137 Program restructuredtext-lint found: NO 00:03:20.137 Program valgrind found: YES (/usr/bin/valgrind) 00:03:20.137 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:20.137 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:20.137 Compiler for C supports arguments -Wwrite-strings: YES 00:03:20.137 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:20.137 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:20.137 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:20.137 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:20.137 Build targets in project: 8 00:03:20.137 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:20.137 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:20.137 00:03:20.137 libvfio-user 0.0.1 00:03:20.137 00:03:20.137 User defined options 00:03:20.137 buildtype : debug 00:03:20.137 default_library: shared 00:03:20.137 libdir : /usr/local/lib 00:03:20.137 00:03:20.137 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:20.715 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:20.977 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:20.977 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:20.977 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:20.977 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:20.977 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:20.977 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:20.977 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:20.977 [8/37] Compiling C object samples/null.p/null.c.o 00:03:21.235 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:21.235 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:21.235 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:21.235 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:21.235 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:21.235 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:21.235 [15/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:21.235 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:21.235 [17/37] Compiling C object samples/server.p/server.c.o 00:03:21.235 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:21.235 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:21.235 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:21.235 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:21.235 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:21.235 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:21.235 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:21.235 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:21.235 [26/37] Compiling C object samples/client.p/client.c.o 00:03:21.235 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:21.235 [28/37] Linking target samples/client 00:03:21.500 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:21.500 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:21.500 [31/37] Linking target test/unit_tests 00:03:21.500 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:21.759 [33/37] Linking target samples/null 00:03:21.759 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:21.759 [35/37] Linking target samples/gpio-pci-idio-16 00:03:21.759 [36/37] Linking target samples/server 00:03:21.759 [37/37] Linking target samples/lspci 00:03:21.759 INFO: autodetecting backend as ninja 00:03:21.759 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:21.759 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:22.332 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:22.332 ninja: no work to do. 00:03:34.528 CC lib/log/log.o 00:03:34.528 CC lib/log/log_flags.o 00:03:34.528 CC lib/log/log_deprecated.o 00:03:34.528 CC lib/ut/ut.o 00:03:34.528 CC lib/ut_mock/mock.o 00:03:34.528 LIB libspdk_ut.a 00:03:34.528 LIB libspdk_ut_mock.a 00:03:34.528 LIB libspdk_log.a 00:03:34.528 SO libspdk_ut.so.2.0 00:03:34.528 SO libspdk_ut_mock.so.6.0 00:03:34.528 SO libspdk_log.so.7.0 00:03:34.528 SYMLINK libspdk_ut.so 00:03:34.528 SYMLINK libspdk_ut_mock.so 00:03:34.528 SYMLINK libspdk_log.so 00:03:34.528 CC lib/dma/dma.o 00:03:34.528 CXX lib/trace_parser/trace.o 00:03:34.528 CC lib/ioat/ioat.o 00:03:34.528 CC lib/util/base64.o 00:03:34.528 CC lib/util/bit_array.o 00:03:34.528 CC lib/util/cpuset.o 00:03:34.528 CC lib/util/crc16.o 00:03:34.528 CC lib/util/crc32.o 00:03:34.528 CC lib/util/crc32c.o 00:03:34.528 CC lib/util/crc32_ieee.o 00:03:34.528 CC lib/util/crc64.o 00:03:34.528 CC lib/util/dif.o 00:03:34.528 CC lib/util/fd.o 00:03:34.528 CC lib/util/file.o 00:03:34.528 CC lib/util/hexlify.o 00:03:34.528 CC lib/util/iov.o 00:03:34.528 CC lib/util/math.o 00:03:34.528 CC lib/util/pipe.o 00:03:34.528 CC lib/util/strerror_tls.o 00:03:34.528 CC lib/util/string.o 00:03:34.528 CC lib/util/uuid.o 00:03:34.528 CC lib/util/fd_group.o 00:03:34.528 CC lib/util/xor.o 00:03:34.528 CC lib/util/zipf.o 00:03:34.786 CC lib/vfio_user/host/vfio_user_pci.o 00:03:34.786 CC lib/vfio_user/host/vfio_user.o 00:03:34.786 LIB libspdk_dma.a 00:03:34.786 SO libspdk_dma.so.4.0 00:03:34.786 SYMLINK libspdk_dma.so 00:03:34.786 LIB libspdk_ioat.a 00:03:35.042 SO libspdk_ioat.so.7.0 00:03:35.042 LIB libspdk_vfio_user.a 00:03:35.042 SYMLINK libspdk_ioat.so 00:03:35.042 SO libspdk_vfio_user.so.5.0 00:03:35.042 SYMLINK libspdk_vfio_user.so 00:03:35.300 LIB libspdk_util.a 00:03:35.300 SO libspdk_util.so.9.1 00:03:35.300 SYMLINK libspdk_util.so 00:03:35.559 CC lib/rdma_provider/common.o 00:03:35.559 CC lib/idxd/idxd.o 00:03:35.559 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:35.559 CC lib/vmd/vmd.o 00:03:35.559 CC lib/rdma_utils/rdma_utils.o 00:03:35.559 CC lib/env_dpdk/env.o 00:03:35.559 CC lib/idxd/idxd_user.o 00:03:35.559 CC lib/env_dpdk/memory.o 00:03:35.559 CC lib/vmd/led.o 00:03:35.559 CC lib/idxd/idxd_kernel.o 00:03:35.559 CC lib/env_dpdk/pci.o 00:03:35.559 CC lib/conf/conf.o 00:03:35.559 CC lib/env_dpdk/init.o 00:03:35.559 CC lib/json/json_parse.o 00:03:35.559 CC lib/env_dpdk/threads.o 00:03:35.559 CC lib/env_dpdk/pci_ioat.o 00:03:35.559 CC lib/json/json_util.o 00:03:35.559 CC lib/env_dpdk/pci_virtio.o 00:03:35.559 CC lib/json/json_write.o 00:03:35.559 CC lib/env_dpdk/pci_vmd.o 00:03:35.559 CC lib/env_dpdk/pci_idxd.o 00:03:35.559 CC lib/env_dpdk/pci_event.o 00:03:35.559 CC lib/env_dpdk/sigbus_handler.o 00:03:35.559 CC lib/env_dpdk/pci_dpdk.o 00:03:35.559 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.559 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:35.559 LIB libspdk_trace_parser.a 00:03:35.559 SO libspdk_trace_parser.so.5.0 00:03:35.817 SYMLINK libspdk_trace_parser.so 00:03:35.817 LIB libspdk_conf.a 00:03:35.817 SO libspdk_conf.so.6.0 00:03:35.817 LIB libspdk_rdma_utils.a 00:03:35.817 LIB libspdk_rdma_provider.a 00:03:35.817 LIB libspdk_json.a 00:03:35.817 SO libspdk_rdma_utils.so.1.0 
00:03:35.817 SO libspdk_rdma_provider.so.6.0 00:03:35.817 SYMLINK libspdk_conf.so 00:03:35.817 SO libspdk_json.so.6.0 00:03:35.817 SYMLINK libspdk_rdma_utils.so 00:03:36.075 SYMLINK libspdk_rdma_provider.so 00:03:36.075 SYMLINK libspdk_json.so 00:03:36.075 LIB libspdk_idxd.a 00:03:36.075 CC lib/jsonrpc/jsonrpc_server.o 00:03:36.075 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:36.075 CC lib/jsonrpc/jsonrpc_client.o 00:03:36.075 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:36.075 SO libspdk_idxd.so.12.0 00:03:36.334 SYMLINK libspdk_idxd.so 00:03:36.334 LIB libspdk_vmd.a 00:03:36.334 SO libspdk_vmd.so.6.0 00:03:36.334 SYMLINK libspdk_vmd.so 00:03:36.334 LIB libspdk_jsonrpc.a 00:03:36.334 SO libspdk_jsonrpc.so.6.0 00:03:36.592 SYMLINK libspdk_jsonrpc.so 00:03:36.592 CC lib/rpc/rpc.o 00:03:36.851 LIB libspdk_rpc.a 00:03:36.851 SO libspdk_rpc.so.6.0 00:03:36.851 SYMLINK libspdk_rpc.so 00:03:37.109 CC lib/notify/notify.o 00:03:37.109 CC lib/keyring/keyring.o 00:03:37.109 CC lib/trace/trace.o 00:03:37.109 CC lib/notify/notify_rpc.o 00:03:37.109 CC lib/keyring/keyring_rpc.o 00:03:37.109 CC lib/trace/trace_flags.o 00:03:37.109 CC lib/trace/trace_rpc.o 00:03:37.367 LIB libspdk_notify.a 00:03:37.367 SO libspdk_notify.so.6.0 00:03:37.367 LIB libspdk_keyring.a 00:03:37.367 SYMLINK libspdk_notify.so 00:03:37.367 LIB libspdk_trace.a 00:03:37.367 SO libspdk_keyring.so.1.0 00:03:37.367 SO libspdk_trace.so.10.0 00:03:37.367 SYMLINK libspdk_keyring.so 00:03:37.367 SYMLINK libspdk_trace.so 00:03:37.626 LIB libspdk_env_dpdk.a 00:03:37.626 SO libspdk_env_dpdk.so.14.1 00:03:37.626 CC lib/thread/thread.o 00:03:37.626 CC lib/thread/iobuf.o 00:03:37.626 CC lib/sock/sock.o 00:03:37.626 CC lib/sock/sock_rpc.o 00:03:37.884 SYMLINK libspdk_env_dpdk.so 00:03:38.143 LIB libspdk_sock.a 00:03:38.143 SO libspdk_sock.so.10.0 00:03:38.143 SYMLINK libspdk_sock.so 00:03:38.143 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:38.143 CC lib/nvme/nvme_ctrlr.o 00:03:38.143 CC lib/nvme/nvme_fabric.o 00:03:38.143 CC lib/nvme/nvme_ns_cmd.o 00:03:38.143 CC lib/nvme/nvme_ns.o 00:03:38.143 CC lib/nvme/nvme_pcie_common.o 00:03:38.143 CC lib/nvme/nvme_pcie.o 00:03:38.143 CC lib/nvme/nvme_qpair.o 00:03:38.143 CC lib/nvme/nvme.o 00:03:38.143 CC lib/nvme/nvme_quirks.o 00:03:38.143 CC lib/nvme/nvme_transport.o 00:03:38.143 CC lib/nvme/nvme_discovery.o 00:03:38.143 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:38.143 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:38.143 CC lib/nvme/nvme_tcp.o 00:03:38.143 CC lib/nvme/nvme_opal.o 00:03:38.143 CC lib/nvme/nvme_io_msg.o 00:03:38.143 CC lib/nvme/nvme_poll_group.o 00:03:38.143 CC lib/nvme/nvme_zns.o 00:03:38.143 CC lib/nvme/nvme_stubs.o 00:03:38.143 CC lib/nvme/nvme_auth.o 00:03:38.143 CC lib/nvme/nvme_cuse.o 00:03:38.143 CC lib/nvme/nvme_rdma.o 00:03:38.143 CC lib/nvme/nvme_vfio_user.o 00:03:39.076 LIB libspdk_thread.a 00:03:39.334 SO libspdk_thread.so.10.1 00:03:39.334 SYMLINK libspdk_thread.so 00:03:39.334 CC lib/blob/blobstore.o 00:03:39.334 CC lib/vfu_tgt/tgt_endpoint.o 00:03:39.334 CC lib/init/json_config.o 00:03:39.334 CC lib/virtio/virtio.o 00:03:39.334 CC lib/accel/accel.o 00:03:39.334 CC lib/virtio/virtio_vhost_user.o 00:03:39.334 CC lib/vfu_tgt/tgt_rpc.o 00:03:39.334 CC lib/blob/request.o 00:03:39.334 CC lib/init/subsystem.o 00:03:39.334 CC lib/accel/accel_rpc.o 00:03:39.334 CC lib/accel/accel_sw.o 00:03:39.334 CC lib/init/subsystem_rpc.o 00:03:39.334 CC lib/virtio/virtio_vfio_user.o 00:03:39.334 CC lib/blob/zeroes.o 00:03:39.334 CC lib/init/rpc.o 00:03:39.334 CC lib/virtio/virtio_pci.o 00:03:39.334 CC 
lib/blob/blob_bs_dev.o 00:03:39.898 LIB libspdk_init.a 00:03:39.898 SO libspdk_init.so.5.0 00:03:39.898 LIB libspdk_virtio.a 00:03:39.898 LIB libspdk_vfu_tgt.a 00:03:39.898 SYMLINK libspdk_init.so 00:03:39.898 SO libspdk_virtio.so.7.0 00:03:39.898 SO libspdk_vfu_tgt.so.3.0 00:03:39.898 SYMLINK libspdk_vfu_tgt.so 00:03:39.898 SYMLINK libspdk_virtio.so 00:03:39.898 CC lib/event/app.o 00:03:39.898 CC lib/event/reactor.o 00:03:39.898 CC lib/event/log_rpc.o 00:03:39.898 CC lib/event/app_rpc.o 00:03:39.898 CC lib/event/scheduler_static.o 00:03:40.553 LIB libspdk_event.a 00:03:40.553 SO libspdk_event.so.14.0 00:03:40.553 LIB libspdk_accel.a 00:03:40.553 SYMLINK libspdk_event.so 00:03:40.553 SO libspdk_accel.so.15.1 00:03:40.553 LIB libspdk_nvme.a 00:03:40.553 SYMLINK libspdk_accel.so 00:03:40.812 SO libspdk_nvme.so.13.1 00:03:40.812 CC lib/bdev/bdev.o 00:03:40.812 CC lib/bdev/bdev_rpc.o 00:03:40.812 CC lib/bdev/bdev_zone.o 00:03:40.812 CC lib/bdev/part.o 00:03:40.812 CC lib/bdev/scsi_nvme.o 00:03:41.070 SYMLINK libspdk_nvme.so 00:03:42.440 LIB libspdk_blob.a 00:03:42.440 SO libspdk_blob.so.11.0 00:03:42.440 SYMLINK libspdk_blob.so 00:03:42.698 CC lib/blobfs/blobfs.o 00:03:42.698 CC lib/blobfs/tree.o 00:03:42.698 CC lib/lvol/lvol.o 00:03:43.262 LIB libspdk_bdev.a 00:03:43.262 SO libspdk_bdev.so.15.1 00:03:43.525 SYMLINK libspdk_bdev.so 00:03:43.525 LIB libspdk_blobfs.a 00:03:43.525 SO libspdk_blobfs.so.10.0 00:03:43.525 SYMLINK libspdk_blobfs.so 00:03:43.525 CC lib/ublk/ublk.o 00:03:43.525 CC lib/nvmf/ctrlr.o 00:03:43.525 CC lib/scsi/dev.o 00:03:43.525 CC lib/nvmf/ctrlr_discovery.o 00:03:43.525 CC lib/nbd/nbd.o 00:03:43.525 CC lib/ublk/ublk_rpc.o 00:03:43.525 CC lib/scsi/lun.o 00:03:43.525 CC lib/nvmf/ctrlr_bdev.o 00:03:43.525 CC lib/nbd/nbd_rpc.o 00:03:43.525 CC lib/scsi/port.o 00:03:43.525 CC lib/ftl/ftl_core.o 00:03:43.525 CC lib/nvmf/subsystem.o 00:03:43.525 CC lib/ftl/ftl_init.o 00:03:43.525 CC lib/scsi/scsi.o 00:03:43.525 CC lib/nvmf/nvmf.o 00:03:43.525 CC lib/scsi/scsi_bdev.o 00:03:43.525 CC lib/ftl/ftl_layout.o 00:03:43.525 CC lib/scsi/scsi_pr.o 00:03:43.525 CC lib/nvmf/nvmf_rpc.o 00:03:43.525 CC lib/nvmf/transport.o 00:03:43.525 CC lib/scsi/scsi_rpc.o 00:03:43.525 CC lib/ftl/ftl_debug.o 00:03:43.525 CC lib/ftl/ftl_io.o 00:03:43.525 CC lib/nvmf/tcp.o 00:03:43.525 CC lib/nvmf/stubs.o 00:03:43.525 CC lib/scsi/task.o 00:03:43.525 CC lib/ftl/ftl_sb.o 00:03:43.525 CC lib/nvmf/mdns_server.o 00:03:43.525 CC lib/nvmf/vfio_user.o 00:03:43.525 CC lib/ftl/ftl_l2p.o 00:03:43.525 CC lib/ftl/ftl_l2p_flat.o 00:03:43.525 CC lib/nvmf/rdma.o 00:03:43.525 CC lib/ftl/ftl_nv_cache.o 00:03:43.525 CC lib/nvmf/auth.o 00:03:43.525 CC lib/ftl/ftl_band.o 00:03:43.525 CC lib/ftl/ftl_band_ops.o 00:03:43.525 CC lib/ftl/ftl_writer.o 00:03:43.525 CC lib/ftl/ftl_rq.o 00:03:43.525 CC lib/ftl/ftl_reloc.o 00:03:43.525 CC lib/ftl/ftl_l2p_cache.o 00:03:43.525 CC lib/ftl/ftl_p2l.o 00:03:43.525 CC lib/ftl/mngt/ftl_mngt.o 00:03:43.525 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:43.525 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:43.525 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:43.525 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:43.525 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:43.785 LIB libspdk_lvol.a 00:03:43.785 SO libspdk_lvol.so.10.0 00:03:44.046 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:44.046 SYMLINK libspdk_lvol.so 00:03:44.046 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:44.046 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:44.046 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:44.046 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:44.046 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:03:44.046 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:44.046 CC lib/ftl/utils/ftl_conf.o 00:03:44.046 CC lib/ftl/utils/ftl_md.o 00:03:44.047 CC lib/ftl/utils/ftl_mempool.o 00:03:44.047 CC lib/ftl/utils/ftl_bitmap.o 00:03:44.047 CC lib/ftl/utils/ftl_property.o 00:03:44.047 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:44.047 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:44.047 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:44.047 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:44.047 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:44.047 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:44.047 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:44.308 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:44.308 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:44.308 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.308 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.308 CC lib/ftl/base/ftl_base_dev.o 00:03:44.308 CC lib/ftl/base/ftl_base_bdev.o 00:03:44.308 CC lib/ftl/ftl_trace.o 00:03:44.308 LIB libspdk_nbd.a 00:03:44.566 SO libspdk_nbd.so.7.0 00:03:44.566 LIB libspdk_scsi.a 00:03:44.566 SYMLINK libspdk_nbd.so 00:03:44.566 SO libspdk_scsi.so.9.0 00:03:44.566 LIB libspdk_ublk.a 00:03:44.566 SO libspdk_ublk.so.3.0 00:03:44.566 SYMLINK libspdk_scsi.so 00:03:44.826 SYMLINK libspdk_ublk.so 00:03:44.826 CC lib/vhost/vhost.o 00:03:44.826 CC lib/iscsi/conn.o 00:03:44.826 CC lib/iscsi/init_grp.o 00:03:44.826 CC lib/vhost/vhost_rpc.o 00:03:44.826 CC lib/vhost/vhost_scsi.o 00:03:44.826 CC lib/iscsi/iscsi.o 00:03:44.826 CC lib/iscsi/md5.o 00:03:44.826 CC lib/vhost/vhost_blk.o 00:03:44.826 CC lib/vhost/rte_vhost_user.o 00:03:44.826 CC lib/iscsi/param.o 00:03:44.826 CC lib/iscsi/portal_grp.o 00:03:44.826 CC lib/iscsi/tgt_node.o 00:03:44.826 CC lib/iscsi/iscsi_subsystem.o 00:03:44.826 CC lib/iscsi/iscsi_rpc.o 00:03:44.826 CC lib/iscsi/task.o 00:03:45.085 LIB libspdk_ftl.a 00:03:45.085 SO libspdk_ftl.so.9.0 00:03:45.652 SYMLINK libspdk_ftl.so 00:03:46.219 LIB libspdk_vhost.a 00:03:46.219 SO libspdk_vhost.so.8.0 00:03:46.219 LIB libspdk_nvmf.a 00:03:46.219 SYMLINK libspdk_vhost.so 00:03:46.219 SO libspdk_nvmf.so.19.0 00:03:46.219 LIB libspdk_iscsi.a 00:03:46.219 SO libspdk_iscsi.so.8.0 00:03:46.477 SYMLINK libspdk_nvmf.so 00:03:46.477 SYMLINK libspdk_iscsi.so 00:03:46.736 CC module/vfu_device/vfu_virtio.o 00:03:46.736 CC module/vfu_device/vfu_virtio_blk.o 00:03:46.736 CC module/vfu_device/vfu_virtio_scsi.o 00:03:46.736 CC module/vfu_device/vfu_virtio_rpc.o 00:03:46.736 CC module/env_dpdk/env_dpdk_rpc.o 00:03:46.736 CC module/accel/error/accel_error.o 00:03:46.736 CC module/keyring/file/keyring.o 00:03:46.736 CC module/accel/ioat/accel_ioat.o 00:03:46.736 CC module/accel/error/accel_error_rpc.o 00:03:46.736 CC module/keyring/file/keyring_rpc.o 00:03:46.736 CC module/accel/ioat/accel_ioat_rpc.o 00:03:46.736 CC module/keyring/linux/keyring.o 00:03:46.736 CC module/blob/bdev/blob_bdev.o 00:03:46.736 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:46.736 CC module/accel/iaa/accel_iaa.o 00:03:46.736 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.736 CC module/keyring/linux/keyring_rpc.o 00:03:46.736 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.736 CC module/accel/dsa/accel_dsa.o 00:03:46.736 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.736 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.736 CC module/sock/posix/posix.o 00:03:46.994 LIB libspdk_env_dpdk_rpc.a 00:03:46.994 SO libspdk_env_dpdk_rpc.so.6.0 00:03:46.994 SYMLINK libspdk_env_dpdk_rpc.so 00:03:46.994 LIB libspdk_keyring_linux.a 00:03:46.994 LIB libspdk_keyring_file.a 00:03:46.994 LIB 
libspdk_scheduler_gscheduler.a 00:03:46.994 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.994 SO libspdk_keyring_linux.so.1.0 00:03:46.994 SO libspdk_keyring_file.so.1.0 00:03:46.994 LIB libspdk_accel_error.a 00:03:46.994 SO libspdk_scheduler_gscheduler.so.4.0 00:03:46.994 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:46.994 LIB libspdk_accel_ioat.a 00:03:46.994 LIB libspdk_scheduler_dynamic.a 00:03:46.994 SO libspdk_accel_error.so.2.0 00:03:46.994 LIB libspdk_accel_iaa.a 00:03:46.994 SO libspdk_accel_ioat.so.6.0 00:03:46.994 SYMLINK libspdk_keyring_linux.so 00:03:46.994 SYMLINK libspdk_keyring_file.so 00:03:46.994 SO libspdk_scheduler_dynamic.so.4.0 00:03:46.994 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.994 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.994 SO libspdk_accel_iaa.so.3.0 00:03:47.251 SYMLINK libspdk_accel_error.so 00:03:47.251 LIB libspdk_accel_dsa.a 00:03:47.251 SYMLINK libspdk_accel_ioat.so 00:03:47.251 LIB libspdk_blob_bdev.a 00:03:47.251 SYMLINK libspdk_scheduler_dynamic.so 00:03:47.251 SO libspdk_accel_dsa.so.5.0 00:03:47.251 SYMLINK libspdk_accel_iaa.so 00:03:47.251 SO libspdk_blob_bdev.so.11.0 00:03:47.251 SYMLINK libspdk_accel_dsa.so 00:03:47.251 SYMLINK libspdk_blob_bdev.so 00:03:47.511 LIB libspdk_vfu_device.a 00:03:47.511 SO libspdk_vfu_device.so.3.0 00:03:47.511 CC module/bdev/malloc/bdev_malloc.o 00:03:47.511 CC module/bdev/null/bdev_null.o 00:03:47.511 CC module/bdev/error/vbdev_error.o 00:03:47.511 CC module/bdev/lvol/vbdev_lvol.o 00:03:47.511 CC module/blobfs/bdev/blobfs_bdev.o 00:03:47.511 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:47.511 CC module/bdev/error/vbdev_error_rpc.o 00:03:47.511 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:47.511 CC module/bdev/null/bdev_null_rpc.o 00:03:47.511 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:47.511 CC module/bdev/nvme/bdev_nvme.o 00:03:47.511 CC module/bdev/passthru/vbdev_passthru.o 00:03:47.511 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:47.511 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:47.511 CC module/bdev/aio/bdev_aio.o 00:03:47.511 CC module/bdev/raid/bdev_raid.o 00:03:47.511 CC module/bdev/split/vbdev_split.o 00:03:47.511 CC module/bdev/nvme/nvme_rpc.o 00:03:47.511 CC module/bdev/delay/vbdev_delay.o 00:03:47.511 CC module/bdev/aio/bdev_aio_rpc.o 00:03:47.511 CC module/bdev/gpt/gpt.o 00:03:47.511 CC module/bdev/nvme/bdev_mdns_client.o 00:03:47.511 CC module/bdev/raid/bdev_raid_rpc.o 00:03:47.511 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:47.511 CC module/bdev/split/vbdev_split_rpc.o 00:03:47.511 CC module/bdev/raid/bdev_raid_sb.o 00:03:47.511 CC module/bdev/nvme/vbdev_opal.o 00:03:47.511 CC module/bdev/gpt/vbdev_gpt.o 00:03:47.511 CC module/bdev/iscsi/bdev_iscsi.o 00:03:47.511 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:47.511 CC module/bdev/raid/raid0.o 00:03:47.511 CC module/bdev/raid/raid1.o 00:03:47.511 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:47.511 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:47.511 CC module/bdev/raid/concat.o 00:03:47.511 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:47.511 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:47.511 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:47.511 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:47.511 CC module/bdev/ftl/bdev_ftl.o 00:03:47.511 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:47.511 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:47.511 SYMLINK libspdk_vfu_device.so 00:03:47.771 LIB libspdk_sock_posix.a 00:03:47.771 SO libspdk_sock_posix.so.6.0 00:03:47.771 SYMLINK libspdk_sock_posix.so 00:03:47.771 LIB 
libspdk_blobfs_bdev.a 00:03:47.771 SO libspdk_blobfs_bdev.so.6.0 00:03:48.028 LIB libspdk_bdev_ftl.a 00:03:48.028 LIB libspdk_bdev_null.a 00:03:48.028 SO libspdk_bdev_ftl.so.6.0 00:03:48.028 LIB libspdk_bdev_split.a 00:03:48.028 SYMLINK libspdk_blobfs_bdev.so 00:03:48.028 SO libspdk_bdev_null.so.6.0 00:03:48.028 LIB libspdk_bdev_error.a 00:03:48.028 SO libspdk_bdev_split.so.6.0 00:03:48.028 SYMLINK libspdk_bdev_ftl.so 00:03:48.028 LIB libspdk_bdev_gpt.a 00:03:48.028 SO libspdk_bdev_error.so.6.0 00:03:48.028 LIB libspdk_bdev_aio.a 00:03:48.028 SYMLINK libspdk_bdev_null.so 00:03:48.028 LIB libspdk_bdev_passthru.a 00:03:48.028 SO libspdk_bdev_gpt.so.6.0 00:03:48.028 SYMLINK libspdk_bdev_split.so 00:03:48.028 SO libspdk_bdev_aio.so.6.0 00:03:48.028 LIB libspdk_bdev_zone_block.a 00:03:48.028 SO libspdk_bdev_passthru.so.6.0 00:03:48.028 LIB libspdk_bdev_delay.a 00:03:48.028 SYMLINK libspdk_bdev_error.so 00:03:48.028 LIB libspdk_bdev_malloc.a 00:03:48.028 SO libspdk_bdev_zone_block.so.6.0 00:03:48.028 SO libspdk_bdev_delay.so.6.0 00:03:48.028 SYMLINK libspdk_bdev_gpt.so 00:03:48.028 SO libspdk_bdev_malloc.so.6.0 00:03:48.028 SYMLINK libspdk_bdev_aio.so 00:03:48.028 SYMLINK libspdk_bdev_passthru.so 00:03:48.028 LIB libspdk_bdev_iscsi.a 00:03:48.028 LIB libspdk_bdev_virtio.a 00:03:48.028 SYMLINK libspdk_bdev_delay.so 00:03:48.028 SYMLINK libspdk_bdev_zone_block.so 00:03:48.028 SYMLINK libspdk_bdev_malloc.so 00:03:48.287 SO libspdk_bdev_iscsi.so.6.0 00:03:48.287 SO libspdk_bdev_virtio.so.6.0 00:03:48.287 SYMLINK libspdk_bdev_iscsi.so 00:03:48.287 SYMLINK libspdk_bdev_virtio.so 00:03:48.287 LIB libspdk_bdev_lvol.a 00:03:48.287 SO libspdk_bdev_lvol.so.6.0 00:03:48.287 SYMLINK libspdk_bdev_lvol.so 00:03:48.546 LIB libspdk_bdev_raid.a 00:03:48.546 SO libspdk_bdev_raid.so.6.0 00:03:48.804 SYMLINK libspdk_bdev_raid.so 00:03:50.176 LIB libspdk_bdev_nvme.a 00:03:50.177 SO libspdk_bdev_nvme.so.7.0 00:03:50.177 SYMLINK libspdk_bdev_nvme.so 00:03:50.435 CC module/event/subsystems/sock/sock.o 00:03:50.435 CC module/event/subsystems/vmd/vmd.o 00:03:50.435 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.435 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.435 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.435 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.435 CC module/event/subsystems/keyring/keyring.o 00:03:50.435 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.435 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:50.435 LIB libspdk_event_keyring.a 00:03:50.435 LIB libspdk_event_vhost_blk.a 00:03:50.435 LIB libspdk_event_scheduler.a 00:03:50.435 LIB libspdk_event_vfu_tgt.a 00:03:50.435 LIB libspdk_event_vmd.a 00:03:50.435 LIB libspdk_event_sock.a 00:03:50.435 SO libspdk_event_keyring.so.1.0 00:03:50.435 LIB libspdk_event_iobuf.a 00:03:50.435 SO libspdk_event_vhost_blk.so.3.0 00:03:50.435 SO libspdk_event_scheduler.so.4.0 00:03:50.435 SO libspdk_event_sock.so.5.0 00:03:50.435 SO libspdk_event_vfu_tgt.so.3.0 00:03:50.435 SO libspdk_event_vmd.so.6.0 00:03:50.695 SO libspdk_event_iobuf.so.3.0 00:03:50.695 SYMLINK libspdk_event_keyring.so 00:03:50.695 SYMLINK libspdk_event_vhost_blk.so 00:03:50.695 SYMLINK libspdk_event_scheduler.so 00:03:50.695 SYMLINK libspdk_event_sock.so 00:03:50.695 SYMLINK libspdk_event_vfu_tgt.so 00:03:50.695 SYMLINK libspdk_event_vmd.so 00:03:50.695 SYMLINK libspdk_event_iobuf.so 00:03:50.695 CC module/event/subsystems/accel/accel.o 00:03:50.954 LIB libspdk_event_accel.a 00:03:50.954 SO libspdk_event_accel.so.6.0 00:03:50.954 SYMLINK libspdk_event_accel.so 
00:03:51.214 CC module/event/subsystems/bdev/bdev.o 00:03:51.473 LIB libspdk_event_bdev.a 00:03:51.473 SO libspdk_event_bdev.so.6.0 00:03:51.473 SYMLINK libspdk_event_bdev.so 00:03:51.730 CC module/event/subsystems/ublk/ublk.o 00:03:51.730 CC module/event/subsystems/scsi/scsi.o 00:03:51.730 CC module/event/subsystems/nbd/nbd.o 00:03:51.730 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.730 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.730 LIB libspdk_event_nbd.a 00:03:51.730 LIB libspdk_event_ublk.a 00:03:51.730 LIB libspdk_event_scsi.a 00:03:51.730 SO libspdk_event_nbd.so.6.0 00:03:51.730 SO libspdk_event_ublk.so.3.0 00:03:51.730 SO libspdk_event_scsi.so.6.0 00:03:51.730 SYMLINK libspdk_event_nbd.so 00:03:51.730 SYMLINK libspdk_event_ublk.so 00:03:51.989 LIB libspdk_event_nvmf.a 00:03:51.989 SYMLINK libspdk_event_scsi.so 00:03:51.989 SO libspdk_event_nvmf.so.6.0 00:03:51.989 SYMLINK libspdk_event_nvmf.so 00:03:51.989 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:51.989 CC module/event/subsystems/iscsi/iscsi.o 00:03:52.248 LIB libspdk_event_vhost_scsi.a 00:03:52.248 LIB libspdk_event_iscsi.a 00:03:52.248 SO libspdk_event_vhost_scsi.so.3.0 00:03:52.248 SO libspdk_event_iscsi.so.6.0 00:03:52.248 SYMLINK libspdk_event_vhost_scsi.so 00:03:52.248 SYMLINK libspdk_event_iscsi.so 00:03:52.248 SO libspdk.so.6.0 00:03:52.248 SYMLINK libspdk.so 00:03:52.511 CC app/trace_record/trace_record.o 00:03:52.511 CXX app/trace/trace.o 00:03:52.511 CC app/spdk_top/spdk_top.o 00:03:52.511 CC app/spdk_lspci/spdk_lspci.o 00:03:52.511 TEST_HEADER include/spdk/accel.h 00:03:52.511 CC app/spdk_nvme_perf/perf.o 00:03:52.511 TEST_HEADER include/spdk/accel_module.h 00:03:52.511 TEST_HEADER include/spdk/assert.h 00:03:52.511 CC app/spdk_nvme_discover/discovery_aer.o 00:03:52.511 TEST_HEADER include/spdk/barrier.h 00:03:52.511 TEST_HEADER include/spdk/base64.h 00:03:52.511 TEST_HEADER include/spdk/bdev.h 00:03:52.511 CC test/rpc_client/rpc_client_test.o 00:03:52.511 TEST_HEADER include/spdk/bdev_module.h 00:03:52.511 TEST_HEADER include/spdk/bdev_zone.h 00:03:52.511 TEST_HEADER include/spdk/bit_pool.h 00:03:52.511 TEST_HEADER include/spdk/bit_array.h 00:03:52.511 TEST_HEADER include/spdk/blob_bdev.h 00:03:52.511 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:52.511 CC app/spdk_nvme_identify/identify.o 00:03:52.511 TEST_HEADER include/spdk/blobfs.h 00:03:52.511 TEST_HEADER include/spdk/blob.h 00:03:52.511 TEST_HEADER include/spdk/conf.h 00:03:52.511 TEST_HEADER include/spdk/config.h 00:03:52.511 TEST_HEADER include/spdk/cpuset.h 00:03:52.511 TEST_HEADER include/spdk/crc16.h 00:03:52.511 TEST_HEADER include/spdk/crc32.h 00:03:52.511 TEST_HEADER include/spdk/crc64.h 00:03:52.511 TEST_HEADER include/spdk/dif.h 00:03:52.511 TEST_HEADER include/spdk/dma.h 00:03:52.511 TEST_HEADER include/spdk/endian.h 00:03:52.511 TEST_HEADER include/spdk/env_dpdk.h 00:03:52.511 TEST_HEADER include/spdk/event.h 00:03:52.511 TEST_HEADER include/spdk/env.h 00:03:52.511 TEST_HEADER include/spdk/fd_group.h 00:03:52.511 TEST_HEADER include/spdk/fd.h 00:03:52.511 TEST_HEADER include/spdk/file.h 00:03:52.511 TEST_HEADER include/spdk/ftl.h 00:03:52.511 TEST_HEADER include/spdk/gpt_spec.h 00:03:52.511 TEST_HEADER include/spdk/hexlify.h 00:03:52.511 TEST_HEADER include/spdk/histogram_data.h 00:03:52.511 TEST_HEADER include/spdk/idxd_spec.h 00:03:52.511 TEST_HEADER include/spdk/idxd.h 00:03:52.511 TEST_HEADER include/spdk/init.h 00:03:52.511 TEST_HEADER include/spdk/ioat.h 00:03:52.511 TEST_HEADER include/spdk/ioat_spec.h 00:03:52.511 
TEST_HEADER include/spdk/iscsi_spec.h 00:03:52.511 TEST_HEADER include/spdk/json.h 00:03:52.511 TEST_HEADER include/spdk/jsonrpc.h 00:03:52.511 TEST_HEADER include/spdk/keyring.h 00:03:52.511 TEST_HEADER include/spdk/keyring_module.h 00:03:52.511 TEST_HEADER include/spdk/likely.h 00:03:52.511 TEST_HEADER include/spdk/log.h 00:03:52.511 TEST_HEADER include/spdk/lvol.h 00:03:52.511 TEST_HEADER include/spdk/memory.h 00:03:52.511 TEST_HEADER include/spdk/mmio.h 00:03:52.511 TEST_HEADER include/spdk/nbd.h 00:03:52.511 TEST_HEADER include/spdk/notify.h 00:03:52.511 TEST_HEADER include/spdk/nvme.h 00:03:52.511 TEST_HEADER include/spdk/nvme_intel.h 00:03:52.511 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:52.511 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:52.511 TEST_HEADER include/spdk/nvme_spec.h 00:03:52.511 TEST_HEADER include/spdk/nvme_zns.h 00:03:52.511 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:52.511 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:52.511 TEST_HEADER include/spdk/nvmf_spec.h 00:03:52.511 TEST_HEADER include/spdk/nvmf.h 00:03:52.511 TEST_HEADER include/spdk/nvmf_transport.h 00:03:52.511 TEST_HEADER include/spdk/opal.h 00:03:52.511 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:52.511 TEST_HEADER include/spdk/opal_spec.h 00:03:52.511 TEST_HEADER include/spdk/pci_ids.h 00:03:52.511 TEST_HEADER include/spdk/pipe.h 00:03:52.511 TEST_HEADER include/spdk/queue.h 00:03:52.511 TEST_HEADER include/spdk/reduce.h 00:03:52.511 TEST_HEADER include/spdk/rpc.h 00:03:52.511 TEST_HEADER include/spdk/scheduler.h 00:03:52.511 TEST_HEADER include/spdk/scsi.h 00:03:52.511 TEST_HEADER include/spdk/scsi_spec.h 00:03:52.511 TEST_HEADER include/spdk/sock.h 00:03:52.511 TEST_HEADER include/spdk/stdinc.h 00:03:52.511 TEST_HEADER include/spdk/string.h 00:03:52.511 TEST_HEADER include/spdk/thread.h 00:03:52.511 TEST_HEADER include/spdk/trace.h 00:03:52.511 TEST_HEADER include/spdk/trace_parser.h 00:03:52.511 TEST_HEADER include/spdk/tree.h 00:03:52.511 TEST_HEADER include/spdk/ublk.h 00:03:52.511 TEST_HEADER include/spdk/uuid.h 00:03:52.511 TEST_HEADER include/spdk/util.h 00:03:52.511 TEST_HEADER include/spdk/version.h 00:03:52.511 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:52.511 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:52.511 TEST_HEADER include/spdk/vhost.h 00:03:52.511 TEST_HEADER include/spdk/vmd.h 00:03:52.511 TEST_HEADER include/spdk/xor.h 00:03:52.511 TEST_HEADER include/spdk/zipf.h 00:03:52.511 CXX test/cpp_headers/accel.o 00:03:52.511 CXX test/cpp_headers/accel_module.o 00:03:52.511 CXX test/cpp_headers/assert.o 00:03:52.511 CXX test/cpp_headers/barrier.o 00:03:52.511 CXX test/cpp_headers/base64.o 00:03:52.511 CXX test/cpp_headers/bdev.o 00:03:52.511 CC app/spdk_dd/spdk_dd.o 00:03:52.511 CXX test/cpp_headers/bdev_module.o 00:03:52.511 CXX test/cpp_headers/bdev_zone.o 00:03:52.511 CXX test/cpp_headers/bit_array.o 00:03:52.511 CXX test/cpp_headers/bit_pool.o 00:03:52.511 CXX test/cpp_headers/blob_bdev.o 00:03:52.511 CXX test/cpp_headers/blobfs_bdev.o 00:03:52.511 CXX test/cpp_headers/blobfs.o 00:03:52.511 CXX test/cpp_headers/blob.o 00:03:52.511 CXX test/cpp_headers/conf.o 00:03:52.511 CC app/iscsi_tgt/iscsi_tgt.o 00:03:52.511 CXX test/cpp_headers/config.o 00:03:52.511 CXX test/cpp_headers/cpuset.o 00:03:52.511 CC app/nvmf_tgt/nvmf_main.o 00:03:52.511 CXX test/cpp_headers/crc16.o 00:03:52.782 CC examples/ioat/verify/verify.o 00:03:52.782 CC examples/util/zipf/zipf.o 00:03:52.782 CC examples/ioat/perf/perf.o 00:03:52.782 CC app/spdk_tgt/spdk_tgt.o 00:03:52.782 CXX 
test/cpp_headers/crc32.o 00:03:52.782 CC test/thread/poller_perf/poller_perf.o 00:03:52.782 CC test/app/jsoncat/jsoncat.o 00:03:52.782 CC test/env/memory/memory_ut.o 00:03:52.782 CC test/app/stub/stub.o 00:03:52.782 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:52.782 CC app/fio/nvme/fio_plugin.o 00:03:52.782 CC test/env/vtophys/vtophys.o 00:03:52.782 CC test/env/pci/pci_ut.o 00:03:52.782 CC test/app/histogram_perf/histogram_perf.o 00:03:52.782 CC app/fio/bdev/fio_plugin.o 00:03:52.782 CC test/app/bdev_svc/bdev_svc.o 00:03:52.782 CC test/dma/test_dma/test_dma.o 00:03:52.782 LINK spdk_lspci 00:03:53.042 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:53.042 CC test/env/mem_callbacks/mem_callbacks.o 00:03:53.042 LINK rpc_client_test 00:03:53.042 LINK spdk_nvme_discover 00:03:53.042 LINK jsoncat 00:03:53.042 LINK zipf 00:03:53.042 LINK interrupt_tgt 00:03:53.042 LINK poller_perf 00:03:53.042 LINK vtophys 00:03:53.042 CXX test/cpp_headers/crc64.o 00:03:53.042 CXX test/cpp_headers/dif.o 00:03:53.042 CXX test/cpp_headers/dma.o 00:03:53.042 LINK histogram_perf 00:03:53.042 CXX test/cpp_headers/endian.o 00:03:53.042 CXX test/cpp_headers/env_dpdk.o 00:03:53.042 LINK env_dpdk_post_init 00:03:53.042 CXX test/cpp_headers/env.o 00:03:53.042 CXX test/cpp_headers/event.o 00:03:53.042 CXX test/cpp_headers/fd_group.o 00:03:53.042 CXX test/cpp_headers/fd.o 00:03:53.042 CXX test/cpp_headers/file.o 00:03:53.042 CXX test/cpp_headers/ftl.o 00:03:53.042 LINK spdk_trace_record 00:03:53.042 LINK iscsi_tgt 00:03:53.042 LINK stub 00:03:53.042 CXX test/cpp_headers/gpt_spec.o 00:03:53.042 LINK nvmf_tgt 00:03:53.042 CXX test/cpp_headers/hexlify.o 00:03:53.042 CXX test/cpp_headers/histogram_data.o 00:03:53.042 LINK ioat_perf 00:03:53.042 CXX test/cpp_headers/idxd.o 00:03:53.042 LINK verify 00:03:53.042 CXX test/cpp_headers/idxd_spec.o 00:03:53.313 LINK spdk_tgt 00:03:53.313 LINK bdev_svc 00:03:53.313 CXX test/cpp_headers/init.o 00:03:53.313 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:53.313 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:53.313 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:53.313 CXX test/cpp_headers/ioat.o 00:03:53.313 CXX test/cpp_headers/ioat_spec.o 00:03:53.313 CXX test/cpp_headers/iscsi_spec.o 00:03:53.313 LINK mem_callbacks 00:03:53.313 CXX test/cpp_headers/json.o 00:03:53.313 LINK spdk_dd 00:03:53.313 CXX test/cpp_headers/jsonrpc.o 00:03:53.612 LINK spdk_trace 00:03:53.612 CXX test/cpp_headers/keyring.o 00:03:53.612 CXX test/cpp_headers/keyring_module.o 00:03:53.612 CXX test/cpp_headers/likely.o 00:03:53.612 CXX test/cpp_headers/log.o 00:03:53.612 CXX test/cpp_headers/lvol.o 00:03:53.612 CXX test/cpp_headers/memory.o 00:03:53.612 CXX test/cpp_headers/mmio.o 00:03:53.612 LINK pci_ut 00:03:53.612 CXX test/cpp_headers/nbd.o 00:03:53.612 CXX test/cpp_headers/notify.o 00:03:53.612 CXX test/cpp_headers/nvme.o 00:03:53.612 CXX test/cpp_headers/nvme_intel.o 00:03:53.612 CXX test/cpp_headers/nvme_ocssd.o 00:03:53.612 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:53.612 CXX test/cpp_headers/nvme_spec.o 00:03:53.612 CXX test/cpp_headers/nvme_zns.o 00:03:53.612 LINK test_dma 00:03:53.612 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.612 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:53.612 CXX test/cpp_headers/nvmf.o 00:03:53.612 CXX test/cpp_headers/nvmf_spec.o 00:03:53.612 CXX test/cpp_headers/nvmf_transport.o 00:03:53.612 CXX test/cpp_headers/opal.o 00:03:53.612 CXX test/cpp_headers/opal_spec.o 00:03:53.612 CC test/event/event_perf/event_perf.o 00:03:53.612 CXX test/cpp_headers/pci_ids.o 00:03:53.612 
CC test/event/reactor/reactor.o 00:03:53.612 CC examples/sock/hello_world/hello_sock.o 00:03:53.612 CC examples/thread/thread/thread_ex.o 00:03:53.612 CC test/event/reactor_perf/reactor_perf.o 00:03:53.876 CC examples/vmd/lsvmd/lsvmd.o 00:03:53.876 LINK nvme_fuzz 00:03:53.876 CC examples/vmd/led/led.o 00:03:53.876 CXX test/cpp_headers/pipe.o 00:03:53.876 CC test/event/app_repeat/app_repeat.o 00:03:53.876 CXX test/cpp_headers/queue.o 00:03:53.876 CXX test/cpp_headers/reduce.o 00:03:53.876 CXX test/cpp_headers/rpc.o 00:03:53.876 LINK spdk_nvme 00:03:53.876 CC examples/idxd/perf/perf.o 00:03:53.876 LINK spdk_bdev 00:03:53.876 CC test/event/scheduler/scheduler.o 00:03:53.876 CXX test/cpp_headers/scheduler.o 00:03:53.876 CXX test/cpp_headers/scsi.o 00:03:53.876 CXX test/cpp_headers/scsi_spec.o 00:03:53.876 CXX test/cpp_headers/sock.o 00:03:53.876 CXX test/cpp_headers/stdinc.o 00:03:53.876 CXX test/cpp_headers/string.o 00:03:53.876 CXX test/cpp_headers/thread.o 00:03:53.876 CXX test/cpp_headers/trace.o 00:03:53.876 CXX test/cpp_headers/trace_parser.o 00:03:53.876 CXX test/cpp_headers/tree.o 00:03:53.876 CXX test/cpp_headers/ublk.o 00:03:53.876 CXX test/cpp_headers/util.o 00:03:53.876 CXX test/cpp_headers/uuid.o 00:03:53.876 CXX test/cpp_headers/version.o 00:03:53.876 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.876 CXX test/cpp_headers/vfio_user_spec.o 00:03:54.142 LINK event_perf 00:03:54.142 CXX test/cpp_headers/vhost.o 00:03:54.142 LINK reactor 00:03:54.142 LINK lsvmd 00:03:54.142 CXX test/cpp_headers/vmd.o 00:03:54.142 CXX test/cpp_headers/xor.o 00:03:54.142 LINK reactor_perf 00:03:54.142 CXX test/cpp_headers/zipf.o 00:03:54.142 CC app/vhost/vhost.o 00:03:54.142 LINK led 00:03:54.142 LINK vhost_fuzz 00:03:54.142 LINK app_repeat 00:03:54.142 LINK spdk_nvme_perf 00:03:54.142 LINK memory_ut 00:03:54.142 LINK hello_sock 00:03:54.142 LINK spdk_top 00:03:54.142 LINK spdk_nvme_identify 00:03:54.142 LINK thread 00:03:54.401 LINK scheduler 00:03:54.401 CC test/nvme/reset/reset.o 00:03:54.401 CC test/nvme/startup/startup.o 00:03:54.401 CC test/nvme/sgl/sgl.o 00:03:54.401 CC test/nvme/simple_copy/simple_copy.o 00:03:54.401 CC test/nvme/e2edp/nvme_dp.o 00:03:54.401 CC test/nvme/err_injection/err_injection.o 00:03:54.401 CC test/nvme/aer/aer.o 00:03:54.401 CC test/nvme/connect_stress/connect_stress.o 00:03:54.401 CC test/nvme/reserve/reserve.o 00:03:54.401 CC test/nvme/overhead/overhead.o 00:03:54.401 CC test/nvme/boot_partition/boot_partition.o 00:03:54.401 CC test/nvme/compliance/nvme_compliance.o 00:03:54.401 CC test/blobfs/mkfs/mkfs.o 00:03:54.401 CC test/accel/dif/dif.o 00:03:54.401 CC test/nvme/fused_ordering/fused_ordering.o 00:03:54.401 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:54.401 CC test/nvme/cuse/cuse.o 00:03:54.401 CC test/nvme/fdp/fdp.o 00:03:54.401 CC test/lvol/esnap/esnap.o 00:03:54.401 LINK idxd_perf 00:03:54.401 LINK vhost 00:03:54.660 LINK startup 00:03:54.661 LINK reserve 00:03:54.661 CC examples/nvme/hello_world/hello_world.o 00:03:54.661 CC examples/nvme/abort/abort.o 00:03:54.661 CC examples/nvme/arbitration/arbitration.o 00:03:54.661 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.661 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:54.661 CC examples/nvme/reconnect/reconnect.o 00:03:54.661 CC examples/nvme/hotplug/hotplug.o 00:03:54.661 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:54.661 LINK doorbell_aers 00:03:54.661 LINK fused_ordering 00:03:54.661 LINK err_injection 00:03:54.661 LINK boot_partition 00:03:54.661 LINK simple_copy 00:03:54.661 LINK reset 
00:03:54.661 LINK connect_stress 00:03:54.661 LINK overhead 00:03:54.661 LINK mkfs 00:03:54.661 LINK nvme_dp 00:03:54.661 LINK aer 00:03:54.919 CC examples/accel/perf/accel_perf.o 00:03:54.919 LINK sgl 00:03:54.919 LINK fdp 00:03:54.919 CC examples/blob/hello_world/hello_blob.o 00:03:54.919 CC examples/blob/cli/blobcli.o 00:03:54.919 LINK nvme_compliance 00:03:54.919 LINK pmr_persistence 00:03:54.919 LINK hotplug 00:03:54.919 LINK hello_world 00:03:54.919 LINK cmb_copy 00:03:55.178 LINK abort 00:03:55.178 LINK dif 00:03:55.178 LINK arbitration 00:03:55.178 LINK reconnect 00:03:55.178 LINK hello_blob 00:03:55.178 LINK nvme_manage 00:03:55.435 LINK accel_perf 00:03:55.435 LINK blobcli 00:03:55.435 CC test/bdev/bdevio/bdevio.o 00:03:55.435 LINK iscsi_fuzz 00:03:55.693 CC examples/bdev/hello_world/hello_bdev.o 00:03:55.693 CC examples/bdev/bdevperf/bdevperf.o 00:03:55.950 LINK bdevio 00:03:55.950 LINK hello_bdev 00:03:55.950 LINK cuse 00:03:56.515 LINK bdevperf 00:03:56.772 CC examples/nvmf/nvmf/nvmf.o 00:03:57.029 LINK nvmf 00:03:59.559 LINK esnap 00:03:59.559 00:03:59.559 real 0m41.349s 00:03:59.559 user 7m24.005s 00:03:59.559 sys 1m48.254s 00:03:59.559 20:09:38 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:59.559 20:09:38 make -- common/autotest_common.sh@10 -- $ set +x 00:03:59.559 ************************************ 00:03:59.559 END TEST make 00:03:59.559 ************************************ 00:03:59.818 20:09:38 -- common/autotest_common.sh@1142 -- $ return 0 00:03:59.818 20:09:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.818 20:09:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.818 20:09:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.818 20:09:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.818 20:09:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.818 20:09:38 -- pm/common@44 -- $ pid=3813040 00:03:59.818 20:09:38 -- pm/common@50 -- $ kill -TERM 3813040 00:03:59.818 20:09:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.818 20:09:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.818 20:09:38 -- pm/common@44 -- $ pid=3813042 00:03:59.818 20:09:38 -- pm/common@50 -- $ kill -TERM 3813042 00:03:59.818 20:09:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.818 20:09:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:59.819 20:09:38 -- pm/common@44 -- $ pid=3813044 00:03:59.819 20:09:38 -- pm/common@50 -- $ kill -TERM 3813044 00:03:59.819 20:09:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.819 20:09:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:59.819 20:09:38 -- pm/common@44 -- $ pid=3813073 00:03:59.819 20:09:38 -- pm/common@50 -- $ sudo -E kill -TERM 3813073 00:03:59.819 20:09:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:59.819 20:09:38 -- nvmf/common.sh@7 -- # uname -s 00:03:59.819 20:09:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.819 20:09:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.819 20:09:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.819 20:09:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:03:59.819 20:09:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.819 20:09:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.819 20:09:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.819 20:09:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.819 20:09:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.819 20:09:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.819 20:09:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.819 20:09:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.819 20:09:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.819 20:09:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.819 20:09:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:59.819 20:09:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.819 20:09:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:59.819 20:09:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.819 20:09:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.819 20:09:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.819 20:09:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.819 20:09:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.819 20:09:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.819 20:09:38 -- paths/export.sh@5 -- # export PATH 00:03:59.819 20:09:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.819 20:09:38 -- nvmf/common.sh@47 -- # : 0 00:03:59.819 20:09:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:59.819 20:09:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:59.819 20:09:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.819 20:09:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.819 20:09:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.819 20:09:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:59.819 20:09:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:59.819 20:09:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:59.819 20:09:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.819 20:09:38 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.819 20:09:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.819 20:09:38 -- 
spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.819 20:09:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:59.819 20:09:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.819 20:09:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:59.819 20:09:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.819 20:09:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.819 20:09:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.819 20:09:38 -- spdk/autotest.sh@48 -- # udevadm_pid=3888533 00:03:59.819 20:09:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.819 20:09:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.819 20:09:38 -- pm/common@17 -- # local monitor 00:03:59.819 20:09:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.819 20:09:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.819 20:09:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.819 20:09:38 -- pm/common@21 -- # date +%s 00:03:59.819 20:09:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.819 20:09:38 -- pm/common@21 -- # date +%s 00:03:59.819 20:09:38 -- pm/common@25 -- # sleep 1 00:03:59.819 20:09:38 -- pm/common@21 -- # date +%s 00:03:59.819 20:09:38 -- pm/common@21 -- # date +%s 00:03:59.819 20:09:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066978 00:03:59.819 20:09:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066978 00:03:59.819 20:09:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066978 00:03:59.819 20:09:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066978 00:03:59.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066978_collect-vmstat.pm.log 00:03:59.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066978_collect-cpu-load.pm.log 00:03:59.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066978_collect-cpu-temp.pm.log 00:03:59.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066978_collect-bmc-pm.bmc.pm.log 00:04:00.753 20:09:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.753 20:09:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.753 20:09:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.753 20:09:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.753 20:09:39 -- spdk/autotest.sh@59 -- # create_test_list 00:04:00.753 20:09:39 -- common/autotest_common.sh@746 
-- # xtrace_disable 00:04:00.753 20:09:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.753 20:09:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:00.753 20:09:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.753 20:09:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.753 20:09:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:00.753 20:09:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.753 20:09:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.753 20:09:39 -- common/autotest_common.sh@1455 -- # uname 00:04:00.753 20:09:39 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:00.753 20:09:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.753 20:09:39 -- common/autotest_common.sh@1475 -- # uname 00:04:00.753 20:09:39 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:00.753 20:09:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:00.753 20:09:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:00.753 20:09:39 -- spdk/autotest.sh@72 -- # hash lcov 00:04:00.753 20:09:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:00.753 20:09:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:00.753 --rc lcov_branch_coverage=1 00:04:00.753 --rc lcov_function_coverage=1 00:04:00.753 --rc genhtml_branch_coverage=1 00:04:00.753 --rc genhtml_function_coverage=1 00:04:00.753 --rc genhtml_legend=1 00:04:00.753 --rc geninfo_all_blocks=1 00:04:00.754 ' 00:04:00.754 20:09:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:00.754 --rc lcov_branch_coverage=1 00:04:00.754 --rc lcov_function_coverage=1 00:04:00.754 --rc genhtml_branch_coverage=1 00:04:00.754 --rc genhtml_function_coverage=1 00:04:00.754 --rc genhtml_legend=1 00:04:00.754 --rc geninfo_all_blocks=1 00:04:00.754 ' 00:04:00.754 20:09:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:00.754 --rc lcov_branch_coverage=1 00:04:00.754 --rc lcov_function_coverage=1 00:04:00.754 --rc genhtml_branch_coverage=1 00:04:00.754 --rc genhtml_function_coverage=1 00:04:00.754 --rc genhtml_legend=1 00:04:00.754 --rc geninfo_all_blocks=1 00:04:00.754 --no-external' 00:04:00.754 20:09:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:00.754 --rc lcov_branch_coverage=1 00:04:00.754 --rc lcov_function_coverage=1 00:04:00.754 --rc genhtml_branch_coverage=1 00:04:00.754 --rc genhtml_function_coverage=1 00:04:00.754 --rc genhtml_legend=1 00:04:00.754 --rc geninfo_all_blocks=1 00:04:00.754 --no-external' 00:04:00.754 20:09:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:01.012 lcov: LCOV version 1.14 00:04:01.012 20:09:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:19.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:19.084 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 
00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:31.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:31.342 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 
00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:31.343 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:31.343 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:31.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions 
found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:31.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:31.344 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:33.871 20:10:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:33.871 20:10:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.871 20:10:11 -- common/autotest_common.sh@10 -- # set +x 00:04:33.871 20:10:11 -- spdk/autotest.sh@91 -- # rm -f 00:04:33.871 20:10:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.806 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:34.806 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:34.806 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:34.806 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:34.806 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:34.806 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:34.806 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:34.806 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:34.806 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:34.806 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:34.806 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:34.806 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:34.806 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:34.806 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:34.806 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:34.806 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:34.806 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:35.064 20:10:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:35.064 20:10:13 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:35.064 20:10:13 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:35.064 20:10:13 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:35.064 20:10:13 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.064 20:10:13 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:35.064 20:10:13 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:35.064 20:10:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.064 20:10:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.064 20:10:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:35.064 20:10:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.064 20:10:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:35.064 20:10:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:35.064 20:10:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:35.064 20:10:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:35.064 No valid GPT data, bailing 00:04:35.064 20:10:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.064 20:10:13 -- scripts/common.sh@391 -- # pt= 00:04:35.064 20:10:13 -- scripts/common.sh@392 -- # return 1 00:04:35.064 20:10:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:35.064 1+0 records in 00:04:35.064 1+0 records out 00:04:35.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00187308 s, 560 MB/s 00:04:35.064 20:10:13 -- spdk/autotest.sh@118 -- # sync 00:04:35.064 20:10:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:35.064 20:10:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:35.064 20:10:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:36.962 20:10:15 -- spdk/autotest.sh@124 -- # uname -s 00:04:36.962 20:10:15 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:36.962 20:10:15 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:36.962 20:10:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.962 20:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.962 20:10:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.962 ************************************ 00:04:36.962 START TEST setup.sh 00:04:36.962 ************************************ 00:04:36.962 20:10:15 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:37.221 * Looking for test storage... 00:04:37.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:37.221 20:10:15 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:37.221 20:10:15 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.221 20:10:15 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:37.221 20:10:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.221 20:10:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.221 20:10:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.221 ************************************ 00:04:37.221 START TEST acl 00:04:37.221 ************************************ 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:37.221 * Looking for test storage... 
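Before the setup tests start, the trace above shows autotest's pre-cleanup pass: get_zoned_devs finds no zoned namespaces, block_in_use finds no partition table on /dev/nvme0n1 ("No valid GPT data, bailing"), so the first MiB of the namespace is zeroed and block devices are synced. The following is only a minimal bash sketch of that flow under those assumptions; it keeps the blkid fallback alone (the real check also consults scripts/spdk-gpt.py) and the device names are illustrative, not taken from the scripts verbatim.

  # Sketch: skip zoned namespaces, wipe namespaces with no partition table.
  for nvme in /dev/nvme*n1; do
      dev=${nvme##*/}
      # Mirrors the is_block_zoned check in the trace: leave zoned namespaces alone.
      if [[ -e /sys/block/$dev/queue/zoned && $(cat /sys/block/$dev/queue/zoned) != none ]]; then
          continue
      fi
      # Mirrors block_in_use: no partition-table type from blkid means the device
      # is free to scrub ("No valid GPT data, bailing" in the log above).
      pt=$(blkid -s PTTYPE -o value "$nvme" || true)
      if [[ -z $pt ]]; then
          dd if=/dev/zero of="$nvme" bs=1M count=1   # the 1+0 records in/out lines above
      fi
  done
  sync
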
00:04:37.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:37.221 20:10:15 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.221 20:10:15 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.221 20:10:15 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:37.221 20:10:15 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:37.221 20:10:15 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:37.221 20:10:15 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.221 20:10:15 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:37.221 20:10:15 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.221 20:10:15 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.594 20:10:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:38.594 20:10:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:38.594 20:10:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.595 20:10:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:38.595 20:10:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.595 20:10:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:39.529 Hugepages 00:04:39.530 node hugesize free / total 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 00:04:39.530 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:39.530 20:10:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.530 20:10:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:39.530 20:10:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.530 20:10:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:39.530 20:10:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.530 20:10:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.530 20:10:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.788 20:10:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:39.788 20:10:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:39.788 20:10:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.788 20:10:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.788 20:10:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:39.788 ************************************ 00:04:39.788 START TEST denied 00:04:39.788 ************************************ 00:04:39.788 20:10:18 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:39.788 20:10:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:39.788 20:10:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:39.788 20:10:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:39.788 20:10:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.788 20:10:18 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.160 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:41.160 20:10:19 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.160 20:10:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.690 00:04:43.690 real 0m3.718s 00:04:43.690 user 0m1.092s 00:04:43.690 sys 0m1.723s 00:04:43.690 20:10:21 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.690 20:10:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:43.690 ************************************ 00:04:43.690 END TEST denied 00:04:43.690 ************************************ 00:04:43.690 20:10:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:43.690 20:10:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:43.690 20:10:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.690 20:10:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.690 20:10:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:43.690 ************************************ 00:04:43.690 START TEST allowed 00:04:43.690 ************************************ 00:04:43.690 20:10:21 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:43.690 20:10:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:43.690 20:10:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:43.690 20:10:21 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:43.690 20:10:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.690 20:10:21 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.221 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:46.221 20:10:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:46.221 20:10:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:46.221 20:10:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:46.221 20:10:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.221 20:10:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.597 00:04:47.597 real 0m3.920s 00:04:47.597 user 0m1.053s 00:04:47.597 sys 0m1.699s 00:04:47.597 20:10:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.597 20:10:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:47.597 ************************************ 00:04:47.597 END TEST allowed 00:04:47.597 ************************************ 00:04:47.597 20:10:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:47.597 00:04:47.597 real 0m10.274s 00:04:47.597 user 0m3.192s 00:04:47.597 sys 0m5.077s 00:04:47.597 20:10:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.597 20:10:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:47.597 ************************************ 00:04:47.597 END TEST acl 00:04:47.597 ************************************ 00:04:47.597 20:10:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:47.597 20:10:25 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:47.597 20:10:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.597 20:10:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.597 20:10:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.597 ************************************ 00:04:47.597 START TEST hugepages 00:04:47.597 ************************************ 00:04:47.597 20:10:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:47.597 * Looking for test storage... 00:04:47.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.597 20:10:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 41625616 kB' 'MemAvailable: 45133868 kB' 'Buffers: 2704 kB' 'Cached: 12295916 kB' 'SwapCached: 0 kB' 'Active: 9296468 kB' 'Inactive: 3506596 kB' 'Active(anon): 8901364 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507776 kB' 'Mapped: 171556 kB' 'Shmem: 8396920 kB' 'KReclaimable: 200784 kB' 'Slab: 576640 kB' 'SReclaimable: 200784 kB' 'SUnreclaim: 375856 kB' 'KernelStack: 12864 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 10012928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.598 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:47.599 
20:10:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:47.599 20:10:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:47.599 20:10:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.599 20:10:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.599 20:10:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.599 ************************************ 00:04:47.599 START TEST default_setup 00:04:47.599 ************************************ 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:47.599 20:10:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.600 20:10:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.013 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.013 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.013 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.013 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.013 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:49.013 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:49.013 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:49.013 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:49.013 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.958 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43775384 kB' 'MemAvailable: 47283640 kB' 'Buffers: 2704 kB' 'Cached: 12296016 kB' 'SwapCached: 0 kB' 'Active: 9315160 kB' 'Inactive: 3506596 kB' 'Active(anon): 8920056 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526184 kB' 'Mapped: 171676 kB' 'Shmem: 8397020 kB' 'KReclaimable: 200792 kB' 'Slab: 576056 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375264 kB' 
'KernelStack: 12800 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10033944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 
20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.958 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.959 20:10:28 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43775712 kB' 'MemAvailable: 47283968 kB' 'Buffers: 2704 kB' 'Cached: 12296016 kB' 'SwapCached: 0 kB' 'Active: 9315128 kB' 'Inactive: 3506596 kB' 'Active(anon): 8920024 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526168 kB' 'Mapped: 171616 kB' 'Shmem: 8397020 kB' 'KReclaimable: 200792 kB' 'Slab: 576040 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375248 kB' 'KernelStack: 12784 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10033960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.959 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43777464 kB' 'MemAvailable: 47285720 kB' 'Buffers: 2704 kB' 'Cached: 12296036 kB' 'SwapCached: 0 kB' 'Active: 9315028 kB' 'Inactive: 3506596 kB' 'Active(anon): 8919924 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526028 kB' 'Mapped: 171616 kB' 'Shmem: 8397040 kB' 'KReclaimable: 200792 kB' 'Slab: 576108 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375316 kB' 'KernelStack: 12832 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10033984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.960 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 
20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.961 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:49.962 nr_hugepages=1024 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.962 resv_hugepages=0 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.962 surplus_hugepages=0 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.962 anon_hugepages=0 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43777464 
kB' 'MemAvailable: 47285720 kB' 'Buffers: 2704 kB' 'Cached: 12296056 kB' 'SwapCached: 0 kB' 'Active: 9315020 kB' 'Inactive: 3506596 kB' 'Active(anon): 8919916 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526024 kB' 'Mapped: 171616 kB' 'Shmem: 8397060 kB' 'KReclaimable: 200792 kB' 'Slab: 576108 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375316 kB' 'KernelStack: 12832 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10034004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.962 20:10:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.962 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [the same test / continue cycle repeats for every remaining /proc/meminfo key from Active through Unaccepted, none of which match HugePages_Total] 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20164912 kB' 'MemUsed: 12712028 kB' 'SwapCached: 0 kB' 'Active: 6396472 kB' 'Inactive: 3263864 kB' 'Active(anon): 6207388 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9355240 kB' 'Mapped: 63432 kB' 'AnonPages: 307808 kB' 'Shmem: 5902292 kB' 'KernelStack: 8328 kB' 'PageTables: 4876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124252 kB' 'Slab: 321316 kB' 'SReclaimable: 124252 kB' 'SUnreclaim: 197064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.963 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [the same test / continue cycle repeats for every remaining node0 meminfo key from MemFree through Unaccepted, none of which match HugePages_Surp] 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.964 node0=1024 expecting 1024 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.964 00:04:49.964 real 0m2.427s 00:04:49.964 user 0m0.672s 00:04:49.964 sys 0m0.894s 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.964 20:10:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:49.964 ************************************ 00:04:49.964 END TEST default_setup 00:04:49.964 ************************************ 00:04:49.964 20:10:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:49.964 20:10:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:49.964 20:10:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.964 20:10:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.964 20:10:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.964 ************************************ 00:04:49.965 START TEST per_node_1G_alloc 00:04:49.965 ************************************ 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:49.965 20:10:28 
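The repeated IFS=': ' / read -r var val _ / [[ ... ]] / continue entries in the trace above are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is passed) until it reaches the one key it was asked for, then echoing that key's value. The sketch below shows the same parsing logic in stand-alone form; the function name and the error handling are illustrative assumptions, not the exact code in the SPDK tree.

shopt -s extglob   # needed for the "Node <N> " prefix strip below

# get_meminfo_sketch KEY [NODE]: echo the value of KEY from /proc/meminfo,
# or from the node-local meminfo file when NODE is given (hypothetical helper).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node-local files prefix every key with "Node <N> "; strip it so the same
    # key names work for both the global and the per-node case.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Examples matching the values seen in this run:
#   get_meminfo_sketch HugePages_Total      -> 1024
#   get_meminfo_sketch HugePages_Surp 0     -> 0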
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.965 20:10:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.350 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:51.350 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:51.350 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:51.350 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:51.350 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:51.350 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:51.350 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:51.350 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:51.350 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:51.350 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:51.350 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:51.350 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:51.350 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:51.350 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:51.350 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:51.350 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:51.350 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43767724 kB' 'MemAvailable: 47275980 kB' 'Buffers: 2704 kB' 'Cached: 12296132 kB' 'SwapCached: 0 kB' 'Active: 9318284 kB' 'Inactive: 3506596 kB' 'Active(anon): 8923180 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529200 kB' 'Mapped: 172128 kB' 'Shmem: 8397136 kB' 'KReclaimable: 200792 kB' 'Slab: 575904 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375112 kB' 'KernelStack: 12784 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10038188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.350 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [the same test / continue cycle repeats key by key as the scan works toward AnonHugePages] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43763444 kB' 'MemAvailable: 47271700 kB' 'Buffers: 2704 kB' 'Cached: 12296132 kB' 'SwapCached: 0 kB' 'Active: 9321128 kB' 'Inactive: 3506596 kB' 'Active(anon): 8926024 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532132 kB' 'Mapped: 172488 kB' 'Shmem: 8397136 kB' 'KReclaimable: 200792 kB' 'Slab: 575896 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375104 kB' 'KernelStack: 12848 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10040460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
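The trace above shows the common.sh meminfo helper being invoked for HugePages_Surp: it reads /proc/meminfo (or a per-node meminfo file when a node id is given), strips any leading "Node <N> " prefix, then walks each "key: value" line with IFS=': ' / read -r until it reaches the requested field and echoes its value. Below is a minimal standalone sketch of that lookup pattern; the function name get_meminfo_value is illustrative and not part of the SPDK scripts, and the per-node path handling is assumed from the [[ -e /sys/devices/system/node/node/meminfo ]] check visible in the trace.

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node <N> " prefix strip below

# Hypothetical helper mirroring the lookup pattern traced above.
get_meminfo_value() {
    local get=$1 node=${2:-}          # field name, optional NUMA node id
    local mem_f=/proc/meminfo
    # Per-node counters live under /sys when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example: get_meminfo_value HugePages_Surp   -> 0 for the snapshot above.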
00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.352 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.353 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.354 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43763840 kB' 'MemAvailable: 47272096 kB' 'Buffers: 2704 kB' 'Cached: 12296152 kB' 'SwapCached: 0 kB' 'Active: 9315128 kB' 'Inactive: 3506596 kB' 'Active(anon): 8920024 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526528 kB' 'Mapped: 172116 kB' 'Shmem: 8397156 kB' 'KReclaimable: 200792 kB' 'Slab: 575968 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375176 kB' 'KernelStack: 12816 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10035456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 
20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
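For reference while reading the HugePages_Rsvd scan that continues below: once the anon, surp and resv values have been collected, the trace further down evaluates setup/hugepages.sh@107 (( 1024 == nr_hugepages + surp + resv )) and @109 (( 1024 == nr_hugepages )). The short sketch that follows restates those checks with the numbers from this run's snapshot; it is illustrative only, and the reading of the literal 1024 as the expanded HugePages_Total value is an assumption noted in the comments.

#!/usr/bin/env bash
# Illustrative restatement of the accounting checks seen later in this trace
# (not the SPDK functions themselves). Values are taken from this run.
nr_hugepages=1024      # target echoed as nr_hugepages=1024 further down in the trace
anon=0 surp=0 resv=0   # AnonHugePages / HugePages_Surp / HugePages_Rsvd, all 0 here
total=1024             # assumed: HugePages_Total, the expanded literal in hugepages.sh@107

(( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> passes
(( total == nr_hugepages ))                 # passes: no surplus or reserved pages

# Size cross-check for this run (only 2048 kB pages configured):
# HugePages_Total * Hugepagesize = 1024 * 2048 kB = 2097152 kB, matching Hugetlb.
(( 1024 * 2048 == 2097152 )) && echo "hugetlb total consistent"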
00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.355 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.356 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.357 nr_hugepages=1024 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.357 
resv_hugepages=0 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.357 surplus_hugepages=0 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.357 anon_hugepages=0 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43764496 kB' 'MemAvailable: 47272752 kB' 'Buffers: 2704 kB' 'Cached: 12296180 kB' 'SwapCached: 0 kB' 'Active: 9318848 kB' 'Inactive: 3506596 kB' 'Active(anon): 8923744 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529792 kB' 'Mapped: 172116 kB' 'Shmem: 8397184 kB' 'KReclaimable: 200792 kB' 'Slab: 575968 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375176 kB' 'KernelStack: 12864 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10038648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 
20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.357 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.358 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.359 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.359 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21209160 kB' 'MemUsed: 11667780 kB' 'SwapCached: 0 kB' 'Active: 6396144 kB' 'Inactive: 3263864 kB' 'Active(anon): 6207060 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9355252 kB' 'Mapped: 64348 kB' 'AnonPages: 307988 kB' 'Shmem: 5902304 kB' 'KernelStack: 8344 kB' 'PageTables: 4840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124252 kB' 'Slab: 321212 kB' 'SReclaimable: 124252 kB' 'SUnreclaim: 196960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.360 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22554852 kB' 'MemUsed: 5109900 kB' 'SwapCached: 0 kB' 'Active: 2919864 kB' 'Inactive: 242732 kB' 'Active(anon): 2713844 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2943648 kB' 'Mapped: 108204 kB' 'AnonPages: 219092 kB' 'Shmem: 2494896 kB' 'KernelStack: 4536 kB' 'PageTables: 3280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76540 kB' 'Slab: 254756 kB' 'SReclaimable: 76540 kB' 'SUnreclaim: 178216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.362 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.363 node0=512 expecting 512 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:51.363 node1=512 expecting 512 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:51.363 00:04:51.363 real 0m1.413s 00:04:51.363 user 0m0.573s 00:04:51.363 sys 0m0.801s 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.363 20:10:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:51.363 ************************************ 00:04:51.363 END TEST per_node_1G_alloc 00:04:51.363 ************************************ 00:04:51.622 20:10:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:51.622 20:10:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:51.622 20:10:29 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.622 20:10:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.622 20:10:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.622 ************************************ 00:04:51.622 START TEST even_2G_alloc 00:04:51.622 ************************************ 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.622 20:10:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.556 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.557 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:04:52.557 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.557 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.557 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.557 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.557 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.557 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.557 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.557 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.557 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.557 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.557 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.557 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.557 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.557 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.557 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43753508 kB' 'MemAvailable: 47261764 kB' 'Buffers: 2704 kB' 'Cached: 12296276 kB' 'SwapCached: 0 kB' 'Active: 9316792 kB' 'Inactive: 3506596 kB' 'Active(anon): 8921688 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527616 kB' 'Mapped: 171684 kB' 'Shmem: 8397280 kB' 'KReclaimable: 200792 kB' 'Slab: 575816 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375024 kB' 'KernelStack: 12848 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10034624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
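The wall of xtrace above is a single call of the get_meminfo helper in setup/common.sh: it loads the whole of /proc/meminfo, then walks it field by field until it reaches the requested key (AnonHugePages here) and echoes the bare value. Condensed into ordinary shell, and reconstructed from the trace rather than copied from the SPDK source, the lookup looks roughly like this (the per-node branch and the helper's exact signature are assumptions):

# Condensed sketch of the lookup the xtrace above performs (not the verbatim SPDK setup/common.sh).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}            # e.g. get=AnonHugePages; node stays empty for the system-wide query
    local mem_f=/proc/meminfo
    local -a mem
    # Assumption: per-node queries switch to the node's own meminfo file when it exists,
    # mirroring the "-e /sys/devices/system/node/node$node/meminfo" test in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix carried by per-node meminfo lines
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every field until the requested one is reached
        echo "$val"                        # print the bare number, e.g. 0 for AnonHugePages, 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages               # on this runner the trace shows this resolving to 0

Each skipped field appears in the log as one "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pair, which is why a single lookup expands into dozens of trace lines before the final "echo 0" / "return 0".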
00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.819 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.820 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.820 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43753804 kB' 'MemAvailable: 47262060 kB' 'Buffers: 2704 kB' 'Cached: 12296280 kB' 'SwapCached: 0 kB' 'Active: 9316172 kB' 'Inactive: 3506596 kB' 'Active(anon): 8921068 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527052 kB' 'Mapped: 171728 kB' 'Shmem: 8397284 kB' 'KReclaimable: 200792 kB' 'Slab: 575812 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375020 kB' 'KernelStack: 12848 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10034640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.821 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 
20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.822 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43753804 kB' 'MemAvailable: 47262060 kB' 'Buffers: 2704 kB' 'Cached: 12296280 kB' 'SwapCached: 0 kB' 'Active: 9316100 kB' 'Inactive: 3506596 kB' 'Active(anon): 8920996 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526920 kB' 'Mapped: 171652 kB' 'Shmem: 8397284 kB' 'KReclaimable: 200792 kB' 'Slab: 575868 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375076 kB' 'KernelStack: 12864 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10034664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.823 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 
20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.824 nr_hugepages=1024 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.824 resv_hugepages=0 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.824 surplus_hugepages=0 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.824 anon_hugepages=0 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:52.824 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43755600 kB' 'MemAvailable: 47263856 kB' 'Buffers: 2704 kB' 'Cached: 12296316 kB' 'SwapCached: 0 kB' 'Active: 9315764 kB' 'Inactive: 3506596 kB' 'Active(anon): 8920660 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526540 kB' 'Mapped: 171652 kB' 'Shmem: 8397320 kB' 'KReclaimable: 200792 kB' 'Slab: 575868 kB' 'SReclaimable: 200792 kB' 'SUnreclaim: 375076 kB' 'KernelStack: 12848 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10034684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.825 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 
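[editor's note] The long run of `[[ <key> == HugePages_Total ]]` / `continue` entries above and just below is the harness's meminfo scanner stepping through every /proc/meminfo key until it finds the requested one and echoes its value. The following is a minimal bash sketch of that lookup pattern, reconstructed from the trace rather than copied from test/setup/common.sh; the function name and call sites are illustrative only.

```bash
#!/usr/bin/env bash
# Illustrative re-creation (not the actual setup/common.sh) of the lookup the
# trace keeps repeating: scan a meminfo file and print the value of one key,
# e.g. HugePages_Rsvd or HugePages_Total.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2-} var val _
    local mem_f=/proc/meminfo mem

    # A per-node query reads that NUMA node's own meminfo file instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix in per-node files

    # Split each line on ': ' and stop at the requested key, as in the trace.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # 1024 in the run above
get_meminfo HugePages_Surp 0    # surplus huge pages on NUMA node 0
```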
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21213496 kB' 'MemUsed: 11663444 kB' 'SwapCached: 0 kB' 'Active: 6395980 kB' 'Inactive: 3263864 kB' 'Active(anon): 6206896 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9355260 kB' 'Mapped: 63432 
kB' 'AnonPages: 307692 kB' 'Shmem: 5902312 kB' 'KernelStack: 8296 kB' 'PageTables: 4708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124244 kB' 'Slab: 321112 kB' 'SReclaimable: 124244 kB' 'SUnreclaim: 196868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.826 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.827 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22542308 kB' 'MemUsed: 5122444 kB' 'SwapCached: 0 kB' 'Active: 2919952 kB' 'Inactive: 242732 kB' 'Active(anon): 2713932 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2943800 kB' 'Mapped: 108220 kB' 'AnonPages: 218972 kB' 'Shmem: 2495048 kB' 
'KernelStack: 4536 kB' 'PageTables: 3240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76548 kB' 'Slab: 254756 kB' 'SReclaimable: 76548 kB' 'SUnreclaim: 178208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.828 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:52.829 node0=512 expecting 512 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:52.829 node1=512 expecting 512 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:52.829 00:04:52.829 real 0m1.420s 00:04:52.829 user 0m0.621s 00:04:52.829 sys 0m0.759s 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.829 20:10:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.829 ************************************ 00:04:52.829 END TEST even_2G_alloc 00:04:52.829 ************************************ 00:04:53.086 20:10:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:53.086 20:10:31 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:53.086 20:10:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.086 20:10:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.086 20:10:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.086 
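The trace above closes out even_2G_alloc: setup/common.sh's get_meminfo walks a per-node meminfo file one "key: value" record at a time until it reaches the requested field (HugePages_Surp for node 1 here), and hugepages.sh then confirms the 1024 requested pages were split evenly, echoing "node0=512 expecting 512" and "node1=512 expecting 512". As a rough illustration of that lookup, here is a minimal standalone sketch; read_node_meminfo is an invented name (the script in the trace calls it get_meminfo), but the file paths and the "Node <n> " prefix handling follow what the trace shows.

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo lookup performed by setup/common.sh above.
    # read_node_meminfo is a hypothetical stand-in for get_meminfo.
    read_node_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem var val _
        # Per-node statistics live under /sys, as the trace shows for node1.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every record with "Node <n> "; strip it so the
        # same "key: value" parsing works for both files.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # size fields are in kB, HugePages_* are bare counts
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    read_node_meminfo HugePages_Free 1   # prints 512 for the node1 state shown above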
************************************ 00:04:53.086 START TEST odd_alloc 00:04:53.086 ************************************ 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.086 20:10:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.017 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.017 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.017 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.017 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.017 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.017 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.017 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
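The odd_alloc prologue above sizes the test before setup.sh re-probes the devices: 2098176 kB of hugepages (HUGEMEM=2049) becomes nr_hugepages=1025 pages of the default 2048 kB size, and with HUGE_EVEN_ALLOC=yes that count is spread across the two NUMA nodes; the trace shows the split landing as 512 on the last node and 513 on the first. The loop below is a hedged reconstruction of that divide-and-carry step; split_over_nodes is a made-up name (the real logic is get_test_nr_hugepages_per_node in setup/hugepages.sh), but the arithmetic reproduces the 513/512 result visible above.

    # Hypothetical stand-in for the per-node distribution in hugepages.sh.
    split_over_nodes() {
        local total=$1 nodes=$2
        local -a per_node
        # Walk nodes from the last one down, giving each its integer share of
        # whatever is still unassigned; any remainder piles up on node 0.
        while (( nodes > 0 )); do
            per_node[nodes - 1]=$(( total / nodes ))
            (( total -= per_node[nodes - 1] ))
            (( nodes-- ))
        done
        echo "${per_node[@]}"
    }

    split_over_nodes 1025 2   # prints "513 512" (node0 node1), the odd_alloc case
    split_over_nodes 1024 2   # prints "512 512", matching even_2G_alloc above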
00:04:54.017 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.017 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.017 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.017 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.017 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.017 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.017 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.017 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.017 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.017 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43752604 kB' 'MemAvailable: 47260848 kB' 'Buffers: 2704 kB' 'Cached: 12296404 kB' 'SwapCached: 0 kB' 'Active: 9312700 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917596 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523352 kB' 'Mapped: 170744 kB' 'Shmem: 8397408 kB' 'KReclaimable: 200768 kB' 'Slab: 575892 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375124 kB' 'KernelStack: 12752 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 10020408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.283 
20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43751364 kB' 'MemAvailable: 47259608 kB' 'Buffers: 2704 kB' 'Cached: 12296408 kB' 'SwapCached: 0 kB' 'Active: 9312484 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917380 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523256 kB' 'Mapped: 170716 kB' 'Shmem: 8397412 kB' 'KReclaimable: 200768 kB' 'Slab: 575884 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375116 kB' 'KernelStack: 12848 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10020616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
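At this point the trace is inside verify_nr_hugepages for the odd_alloc layout: anon has already been pinned to 0 from AnonHugePages, the scan running here is extracting the system-wide HugePages_Surp, and HugePages_Rsvd follows, after which reserved and surplus pages are folded into each node's expected count before the "nodeN=... expecting ..." lines are printed, just as in the even_2G_alloc run earlier. The sketch below condenses that bookkeeping; verify_split is an invented name, it reuses the read_node_meminfo helper sketched above, and the 513/512 targets are taken from this trace, so read it as an illustration of the flow rather than the real hugepages.sh code.

    # Illustration only: condensed view of the verification pass in this trace.
    verify_split() {
        local -a expect=(513 512)   # per-node targets computed for odd_alloc above
        local node surp total resv ok=0
        resv=$(read_node_meminfo HugePages_Rsvd) || resv=0   # system-wide reserved pages
        for node in "${!expect[@]}"; do
            # Reserved and surplus pages are added to the expectation, mirroring
            # the nodes_test[node] += resv / += surp steps seen in the trace.
            surp=$(read_node_meminfo HugePages_Surp "$node") || surp=0
            total=$(read_node_meminfo HugePages_Total "$node") || total=0
            echo "node$node=$total expecting $(( expect[node] + resv + surp ))"
            (( total != expect[node] + resv + surp )) && ok=1
        done
        return $ok
    }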
00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43750364 kB' 'MemAvailable: 47258608 kB' 'Buffers: 2704 kB' 'Cached: 12296424 kB' 'SwapCached: 0 kB' 'Active: 9314176 kB' 'Inactive: 3506596 kB' 'Active(anon): 8919072 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524892 kB' 'Mapped: 170716 kB' 'Shmem: 8397428 kB' 'KReclaimable: 200768 kB' 'Slab: 575908 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375140 kB' 'KernelStack: 13104 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10027392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.285 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.286 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:54.287 nr_hugepages=1025 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.287 resv_hugepages=0 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.287 surplus_hugepages=0 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.287 anon_hugepages=0 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
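The trace above repeatedly splits each meminfo line with IFS=': ' and read -r var val _ until it reaches the requested key (first HugePages_Surp, then HugePages_Rsvd), echoes the value, and returns. A minimal sketch of that lookup pattern, assuming a Linux host with optional per-node meminfo files; the name get_meminfo_sketch and its return convention are illustrative, not the test script's actual helper:

    # Sketch only: look up one key from /proc/meminfo, or from a per-node
    # meminfo file when a node number is given (per-node lines carry a
    # "Node N " prefix that must be stripped before splitting).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }                 # strip "Node N " on per-node files
            IFS=': ' read -r var val _ <<< "$line"     # e.g. var=HugePages_Rsvd val=0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

On the system traced here, get_meminfo_sketch HugePages_Rsvd would print 0, matching the resv=0 result the log records above.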
-- # local var val 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43748720 kB' 'MemAvailable: 47256964 kB' 'Buffers: 2704 kB' 'Cached: 12296444 kB' 'SwapCached: 0 kB' 'Active: 9314304 kB' 'Inactive: 3506596 kB' 'Active(anon): 8919200 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524980 kB' 'Mapped: 170664 kB' 'Shmem: 8397448 kB' 'KReclaimable: 200768 kB' 'Slab: 575908 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375140 kB' 'KernelStack: 13216 kB' 'PageTables: 9588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10020100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.287 
20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.287 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.288 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21198636 kB' 'MemUsed: 11678304 kB' 'SwapCached: 0 kB' 'Active: 6394428 kB' 'Inactive: 3263864 kB' 'Active(anon): 6205344 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9355260 kB' 'Mapped: 62644 kB' 'AnonPages: 306144 kB' 'Shmem: 5902312 kB' 'KernelStack: 8616 kB' 'PageTables: 6300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124220 kB' 'Slab: 321076 kB' 'SReclaimable: 124220 kB' 'SUnreclaim: 196856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.289 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22549048 kB' 'MemUsed: 5115704 kB' 'SwapCached: 0 kB' 'Active: 2920532 kB' 'Inactive: 242732 kB' 'Active(anon): 2714512 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2943912 kB' 'Mapped: 108012 kB' 'AnonPages: 219068 kB' 'Shmem: 2495160 kB' 'KernelStack: 4472 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76548 kB' 'Slab: 254832 kB' 'SReclaimable: 76548 kB' 'SUnreclaim: 178284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.290 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- 
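What the wall of trace above amounts to: the run is calling setup/common.sh's get_meminfo with HugePages_Surp and node 1, which walks /sys/devices/system/node/node1/meminfo field by field until it reaches HugePages_Surp (0 for this run) and returns it to the hugepages test. A minimal stand-alone sketch of that lookup, with a hypothetical function name and none of the harness's bookkeeping (illustrative only, not the actual setup/common.sh implementation):

    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}             # per-node files prefix every row with "Node N "
            IFS=': ' read -r var val _ <<<"$line"  # split into key and first value column
            if [[ $var == "$get" ]]; then
                echo "$val"                        # e.g. HugePages_Surp -> 0
                return 0
            fi
        done < "$mem_f"
        echo 0                                     # requested field not present
    }

Invoked as get_meminfo_sketch HugePages_Surp 1, it would print the node-1 surplus count that the trace above folds into nodes_test[node].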
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:54.291 node0=512 expecting 513 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:54.291 node1=513 expecting 512 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:54.291 00:04:54.291 real 0m1.406s 00:04:54.291 user 0m0.552s 00:04:54.291 sys 0m0.815s 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.291 20:10:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:54.291 ************************************ 00:04:54.291 END TEST odd_alloc 00:04:54.291 ************************************ 00:04:54.550 20:10:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:54.550 20:10:32 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:54.550 20:10:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.550 20:10:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.550 20:10:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.550 ************************************ 00:04:54.550 START TEST custom_alloc 00:04:54.550 ************************************ 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.550 20:10:32 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:54.550 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
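For reference, the arithmetic the custom_alloc prologue just traced works out as follows: treating the requested sizes as kB, which matches the 2048 kB Hugepagesize reported further down in this log, the 1048576 and 2097152 requests become 512 and 1024 pages, and those per-node counts are stitched into the HUGENODE string that setup.sh consumes. A rough stand-alone sketch with hypothetical variable names, not the real setup/hugepages.sh bookkeeping:

    hugepagesize_kb=2048                                  # Hugepagesize reported in this log
    declare -a nodes_hp=()
    nodes_hp[0]=$(( 1048576 / hugepagesize_kb ))          # 1 GiB on node 0  -> 512 pages
    nodes_hp[1]=$(( 2097152 / hugepagesize_kb ))          # 2 GiB on node 1  -> 1024 pages
    hugenode="" nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        hugenode+="${hugenode:+,}nodes_hp[$node]=${nodes_hp[node]}"
        (( nr_hugepages += nodes_hp[node] ))
    done
    printf 'HUGENODE=%s nr_hugepages=%d\n' "$hugenode" "$nr_hugepages"
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536

The printed values match what the run exports below (HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', nr_hugepages=1536) before handing off to scripts/setup.sh.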
00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.551 20:10:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.485 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.485 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:55.485 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.485 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.485 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.485 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.485 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.485 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.485 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.485 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.485 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.485 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:04:55.485 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.485 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.485 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.485 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.485 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42720264 kB' 'MemAvailable: 46228508 kB' 'Buffers: 2704 kB' 'Cached: 12296540 kB' 'SwapCached: 0 kB' 'Active: 9312116 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917012 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522676 kB' 'Mapped: 170732 kB' 'Shmem: 8397544 kB' 'KReclaimable: 200768 kB' 'Slab: 575968 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375200 kB' 'KernelStack: 12800 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10019672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.749 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.750 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.750 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42721136 kB' 'MemAvailable: 46229380 kB' 'Buffers: 2704 kB' 'Cached: 12296540 kB' 'SwapCached: 0 kB' 'Active: 9312480 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917376 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523092 kB' 'Mapped: 170732 kB' 'Shmem: 8397544 kB' 'KReclaimable: 200768 kB' 'Slab: 575968 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375200 kB' 'KernelStack: 12800 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10019688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.751 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
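The trace above and below records setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time: IFS=': ' read -r var val _ splits each "Key: value kB" line, every key that is not the one being looked up hits the continue branch, and the matching key's value is echoed back (0 for HugePages_Surp here). Below is a minimal standalone sketch of that parsing idea; it assumes a Linux host with /proc/meminfo and, for per-node queries, /sys/devices/system/node/nodeN/meminfo, and the name get_meminfo_sketch is invented for illustration rather than taken from the SPDK scripts.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern seen in the trace (not the SPDK helper itself).
# Usage: get_meminfo_sketch <Key> [numa-node]
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-NUMA-node figures live under /sys; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Node files prefix every line with "Node <n> "; strip it so both formats parse alike.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example: on the host traced here this would print 1536 (global HugePages_Total)
# and 512 (HugePages_Free on node 0), matching the values echoed later in the log.
get_meminfo_sketch HugePages_Total
get_meminfo_sketch HugePages_Free 0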
00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42721476 kB' 'MemAvailable: 46229720 kB' 'Buffers: 2704 kB' 'Cached: 12296560 kB' 'SwapCached: 0 kB' 'Active: 9312384 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917280 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522972 kB' 'Mapped: 170732 kB' 'Shmem: 8397564 kB' 'KReclaimable: 200768 kB' 'Slab: 575928 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375160 kB' 'KernelStack: 12800 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10019712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.752 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.753 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:55.754 nr_hugepages=1536 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.754 resv_hugepages=0 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.754 surplus_hugepages=0 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.754 anon_hugepages=0 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.754 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42721388 kB' 'MemAvailable: 46229632 kB' 'Buffers: 2704 kB' 'Cached: 12296576 kB' 'SwapCached: 0 kB' 'Active: 9312332 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917228 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522896 kB' 'Mapped: 170732 kB' 'Shmem: 8397580 kB' 'KReclaimable: 200768 kB' 'Slab: 575928 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375160 kB' 'KernelStack: 12800 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10019732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.755 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.756 
20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21212760 kB' 'MemUsed: 11664180 kB' 'SwapCached: 0 kB' 'Active: 6392096 kB' 'Inactive: 3263864 kB' 'Active(anon): 6203012 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9355268 kB' 'Mapped: 62704 kB' 'AnonPages: 303784 kB' 'Shmem: 5902320 kB' 'KernelStack: 8296 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124220 kB' 'Slab: 321112 kB' 'SReclaimable: 124220 kB' 'SUnreclaim: 196892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 
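The node-0 pass above is the generic meminfo lookup the setup script drives: mem_f is pointed at /sys/devices/system/node/node0/meminfo, the "Node 0 " prefix is stripped from every row, and the script walks "field: value" pairs until the requested field (HugePages_Surp here) matches, at which point its value is echoed. A minimal sketch of that pattern, using hypothetical names rather than the actual setup/common.sh helpers and assuming a Linux host that exposes the per-node meminfo files:

get_meminfo_sketch() {
    # get_meminfo_sketch FIELD [NODE] -> prints the field's value and returns 0;
    # returns 1 if the field never shows up. Illustrative only, not the SPDK helper.
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}             # per-node files prefix every row with "Node N "
        IFS=': ' read -r var val _ <<< "$line" # split "field: value unit" into its parts
        if [[ $var == "$get" ]]; then
            echo "$val"                        # e.g. 0 for HugePages_Surp on node 0
            return 0
        fi
    done < "$mem_f"
    return 1
}

# get_meminfo_sketch HugePages_Surp 0    -> 0 on this run
# get_meminfo_sketch HugePages_Total 0   -> 512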
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.756 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.757 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 21509192 kB' 'MemUsed: 6155560 kB' 'SwapCached: 0 kB' 'Active: 2920408 kB' 'Inactive: 242732 kB' 'Active(anon): 2714388 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2944060 kB' 'Mapped: 108028 kB' 'AnonPages: 219244 kB' 'Shmem: 2495308 kB' 'KernelStack: 4504 kB' 'PageTables: 3080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76548 kB' 'Slab: 254816 kB' 'SReclaimable: 76548 kB' 'SUnreclaim: 178268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.758 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.759 20:10:34 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:55.759 node0=512 expecting 512 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:55.759 node1=1024 expecting 1024 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:55.759 00:04:55.759 real 0m1.392s 00:04:55.759 user 0m0.598s 00:04:55.759 sys 0m0.757s 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.759 20:10:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:55.759 ************************************ 00:04:55.759 END TEST custom_alloc 00:04:55.759 ************************************ 00:04:55.759 20:10:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:55.759 20:10:34 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:55.759 20:10:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.759 20:10:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.759 20:10:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.018 ************************************ 00:04:56.018 START TEST no_shrink_alloc 00:04:56.018 ************************************ 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.018 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.019 20:10:34 
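Put together, the custom_alloc pass boils down to one accounting check: the 1536-page pool the test configured has to show up as 512 pages on node0 and 1024 on node1, with no surplus on either node, which is what the "node0=512 expecting 512" / "node1=1024 expecting 1024" lines above confirm. A hypothetical condensation of that check (verify_split_sketch and the hard-coded split are illustrative, not the hugepages.sh code itself):

verify_split_sketch() {
    local -A expect=( [0]=512 [1]=1024 )       # the split this run asked for
    local node got total=0
    for node in "${!expect[@]}"; do
        # pull HugePages_Total straight from the per-node meminfo file
        got=$(awk -v n="$node" '$1=="Node" && $2==n && $3=="HugePages_Total:" {print $4}' \
                  /sys/devices/system/node/node"$node"/meminfo)
        echo "node${node}=${got} expecting ${expect[$node]}"
        [[ $got == "${expect[$node]}" ]] || return 1
        (( total += got ))
    done
    (( total == 1536 ))                        # the global pool must add up as well
}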
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.019 20:10:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.953 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.953 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.953 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.953 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.953 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.953 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.953 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.953 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.953 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.953 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.953 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.953 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.953 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.953 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.953 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.953 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.953 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43735080 kB' 'MemAvailable: 47243324 kB' 'Buffers: 2704 kB' 'Cached: 12296664 kB' 'SwapCached: 0 kB' 'Active: 9314136 kB' 'Inactive: 3506596 kB' 'Active(anon): 8919032 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524548 kB' 'Mapped: 171256 kB' 'Shmem: 8397668 kB' 'KReclaimable: 200768 kB' 'Slab: 575776 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375008 kB' 'KernelStack: 12800 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10022304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 
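For context on the numbers this no_shrink_alloc pass goes on to verify: the get_test_nr_hugepages 2097152 0 call at the start of this test turned into nr_hugepages=1024 pinned to node 0, which is consistent with treating both the request and Hugepagesize as kB values (2097152 / 2048 = 1024). A hypothetical sketch of that sizing step under that assumption; the names are illustrative, not the hugepages.sh functions themselves:

declare -A nodes_req=()
request_pages_sketch() {
    # request_pages_sketch SIZE_KB NODE... -> prints the page count and records it
    # per node id; the kB interpretation of SIZE_KB is an assumption from the trace.
    local size_kb=$1; shift
    local hp_kb node
    hp_kb=$(awk '$1=="Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this box
    local nr=$(( size_kb / hp_kb ))
    for node in "$@"; do
        nodes_req[$node]=$nr
    done
    echo "$nr"
}

# request_pages_sketch 2097152 0   -> 1024, with nodes_req[0]=1024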
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.219 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 
20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43732528 kB' 'MemAvailable: 47240772 kB' 'Buffers: 2704 kB' 'Cached: 12296668 kB' 'SwapCached: 0 kB' 'Active: 9316948 kB' 'Inactive: 3506596 kB' 'Active(anon): 8921844 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527400 kB' 'Mapped: 171180 kB' 'Shmem: 8397672 kB' 'KReclaimable: 200768 kB' 'Slab: 575768 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375000 kB' 'KernelStack: 12800 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
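The trace above is setup/common.sh's get_meminfo scanning the meminfo fields one at a time with IFS=': ' read -r var val _, continuing past every key until the requested one (AnonHugePages here) matches, echoing its value and returning; hugepages.sh then records anon=0. A minimal standalone sketch of that lookup pattern follows, with a hypothetical helper name and reading /proc/meminfo directly instead of through the mapfile'd array the harness uses:

  #!/usr/bin/env bash
  # Simplified illustration of the field lookup seen in the trace above.
  # get_mem_field is a made-up name; the real helper is get_meminfo in setup/common.sh.
  get_mem_field() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # skip every other meminfo key
          echo "$val"                         # value only; a trailing "kB" lands in the third field
          return 0
      done < /proc/meminfo
      return 1
  }

  anon=$(get_mem_field AnonHugePages)   # 0 on this builder, per the log
  echo "anon_hugepages=${anon}"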
kB' 'Committed_AS: 10025356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.220 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.221 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43732692 kB' 'MemAvailable: 47240936 kB' 'Buffers: 2704 kB' 'Cached: 12296684 kB' 'SwapCached: 0 kB' 'Active: 9318688 kB' 'Inactive: 3506596 kB' 'Active(anon): 8923584 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529140 kB' 'Mapped: 171660 kB' 'Shmem: 8397688 kB' 'KReclaimable: 200768 kB' 'Slab: 575828 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375060 kB' 'KernelStack: 12864 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10026448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
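Right before each scan the trace also shows how the meminfo source is chosen: node= is empty for these calls, so the test for /sys/devices/system/node/node/meminfo fails and mem_f stays /proc/meminfo; only when a NUMA node is passed would the per-node file be used (its lines carry a "Node <n> " prefix, which the harness strips with the extglob substitution visible above). A small sketch of that selection logic, assuming the function and variable names below are illustrative rather than the exact ones in setup/common.sh:

  # Pick the meminfo source the way the trace does: per-node file if it exists,
  # otherwise the system-wide /proc/meminfo.
  pick_meminfo_file() {
      local node=$1 mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
          mem_f=/sys/devices/system/node/node${node}/meminfo
          # per-node lines read "Node 0 MemTotal: ...", so callers strip the "Node <n> " prefix
      fi
      echo "$mem_f"
  }

  pick_meminfo_file      # -> /proc/meminfo, as in this run (no node argument)
  pick_meminfo_file 0    # -> node0's meminfo when that sysfs file exists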
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.222 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.223 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.224 nr_hugepages=1024 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.224 resv_hugepages=0 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.224 surplus_hugepages=0 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.224 anon_hugepages=0 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43738868 kB' 'MemAvailable: 47247112 kB' 'Buffers: 2704 kB' 'Cached: 12296704 kB' 'SwapCached: 0 kB' 'Active: 9312460 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917356 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522832 kB' 'Mapped: 170744 kB' 'Shmem: 8397708 kB' 'KReclaimable: 200768 kB' 'Slab: 575828 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375060 kB' 'KernelStack: 12816 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10020348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
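With anon, surp and resv collected (all 0 here) and nr_hugepages echoed as 1024, the trace then evaluates the hugepages.sh consistency checks (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) before re-reading HugePages_Total in the scan that continues below. A sketch of the same arithmetic check, with the expected value taken as an assumption from this log (xtrace shows it already expanded to 1024) rather than recomputed:

  # Consistency check mirroring setup/hugepages.sh@107-109 as it appears in the trace:
  # the expected page count must equal nr_hugepages plus surplus and reserved pages.
  nr_hugepages=1024   # echoed by the harness above
  surp=0              # HugePages_Surp, from the scan above
  resv=0              # HugePages_Rsvd, from the scan above
  expected=1024       # left-hand value seen in the trace (assumed; already expanded by xtrace)

  if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
      echo "hugepage accounting consistent: ${expected} pages"
  else
      echo "unexpected hugepage accounting" >&2
  fi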
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.224 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
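[editor's note] The long xtrace run above and below is setup/common.sh's get_meminfo scanning /proc/meminfo one key at a time until it reaches HugePages_Total. A minimal standalone sketch of that lookup, assuming the standard "Key: value kB" layout (the function name meminfo_value is illustrative, not the script's own):

#!/usr/bin/env bash
# Minimal sketch (not the SPDK helper itself) of the lookup the xtrace is
# performing: walk a meminfo file key by key and print the value for one key.
shopt -s extglob

meminfo_value() {
    local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # A per-node lookup reads the node's own file; its lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node +([0-9]) }             # drop the per-node prefix, if present
        IFS=': ' read -r var val _ <<< "$line"  # split "Key: value kB" into key/value
        if [[ $var == "$key" ]]; then           # the comparison the trace repeats per key
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

meminfo_value HugePages_Total       # prints 1024 on this runner
meminfo_value HugePages_Surp 0      # surplus pages on NUMA node 0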
00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.225 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.226 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20150336 kB' 'MemUsed: 12726604 kB' 'SwapCached: 0 kB' 'Active: 6392344 kB' 'Inactive: 3263864 kB' 'Active(anon): 6203260 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9355272 kB' 'Mapped: 62704 kB' 'AnonPages: 304108 kB' 'Shmem: 5902324 kB' 'KernelStack: 8360 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124220 kB' 'Slab: 321004 kB' 'SReclaimable: 124220 kB' 'SUnreclaim: 196784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
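[editor's note] Here the trace switches to per-node bookkeeping: get_nodes enumerates /sys/devices/system/node/node<N>, and the HugePages_Surp lookup reads node0's own meminfo file after stripping its "Node N " prefix. A hedged sketch of equivalent enumeration (variable names are illustrative, not the script's own):

#!/usr/bin/env bash
# Sketch of the per-node bookkeeping traced here: enumerate the NUMA nodes and
# note how many huge pages each currently holds.
shopt -s extglob nullglob

declare -A node_pages
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}                                # ".../node0" -> "0"
    node_pages[$node]=$(awk -v n="$node" \
        '$1=="Node" && $2==n && $3=="HugePages_Total:" {print $4}' \
        "$node_dir/meminfo")
done

echo "detected ${#node_pages[@]} node(s)"
for n in "${!node_pages[@]}"; do
    echo "node$n holds ${node_pages[$n]} huge pages"       # e.g. node0 -> 1024, node1 -> 0
done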
00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.226 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.227 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:57.228 node0=1024 expecting 1024 00:04:57.228 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:57.228 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:57.228 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:57.228 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:57.228 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.228 20:10:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.601 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.601 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:58.601 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.601 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.601 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.601 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.601 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.601 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.601 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.601 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.601 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.601 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.601 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.601 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.601 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.601 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.601 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
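[editor's note] The step traced just above sets CLEAR_HUGE=no and NRHUGE=512 before re-running scripts/setup.sh, and the next line shows setup.sh declining to shrink the existing allocation. A rough sketch of the behaviour the test relies on, under the assumption that the script simply skips nodes that already hold enough pages (assumed logic, not the real setup.sh):

#!/usr/bin/env bash
# Assumed logic only: with CLEAR_HUGE=no, a smaller NRHUGE request must not
# shrink an allocation that is already larger.
NRHUGE=${NRHUGE:-512}
nr_file=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
allocated=$(cat "$nr_file")

if (( allocated >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
    # Leave the pages alone; the follow-up verification still expects 1024.
else
    echo "$NRHUGE" > "$nr_file"     # needs root; only taken when the node is short
fi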
00:04:58.601 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.601 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43723368 kB' 'MemAvailable: 47231612 kB' 'Buffers: 2704 kB' 'Cached: 12296772 kB' 'SwapCached: 0 kB' 'Active: 9313092 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917988 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523420 kB' 'Mapped: 170752 kB' 'Shmem: 8397776 kB' 'KReclaimable: 200768 kB' 'Slab: 575980 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375212 kB' 'KernelStack: 12832 kB' 'PageTables: 7508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10020392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 
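[editor's note] After the INFO line the test re-enters verify_nr_hugepages: it first checks that transparent_hugepage is not pinned to [never] before counting AnonHugePages, then re-reads the global totals and asserts total == nr_hugepages + surplus + reserved. A compact sketch of those checks (illustrative and helper-free, not the script itself):

#!/usr/bin/env bash
# Illustrative sketch of the verification pass traced below.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
nr_hugepages=1024                                         # what this test expects

echo "anon_hugepages=$anon"
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting OK (total=$total surp=$surp resv=$resv)"
else
    echo "mismatch: total=$total surp=$surp resv=$resv" >&2
fi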
00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.602 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.603 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43723696 kB' 'MemAvailable: 47231940 kB' 'Buffers: 2704 kB' 'Cached: 12296776 kB' 'SwapCached: 0 kB' 'Active: 9313012 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917908 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523400 kB' 'Mapped: 170752 kB' 'Shmem: 8397780 kB' 'KReclaimable: 200768 kB' 'Slab: 575980 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375212 kB' 'KernelStack: 12896 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10020660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.604 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.605 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.606 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.607 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43724564 kB' 'MemAvailable: 47232808 kB' 'Buffers: 2704 kB' 'Cached: 12296792 kB' 'SwapCached: 0 kB' 'Active: 9312848 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917744 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523308 kB' 'Mapped: 170812 kB' 'Shmem: 8397796 kB' 'KReclaimable: 200768 kB' 'Slab: 576056 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375288 kB' 'KernelStack: 12896 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10020432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.607 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.608 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.609 nr_hugepages=1024 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.609 resv_hugepages=0 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.609 surplus_hugepages=0 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.609 anon_hugepages=0 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
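The trace above shows setup/common.sh's get_meminfo helper walking /proc/meminfo field by field (IFS=': '; read -r var val _; continue until the requested key) for AnonHugePages, HugePages_Surp and HugePages_Rsvd, each returning 0 on this host, before the same query is issued for HugePages_Total. A minimal sketch of that helper, reconstructed only from the traced statements (the shipped implementation may differ in details), is:

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) prefix-stripping pattern below

# Hypothetical reconstruction of setup/common.sh's get_meminfo, pieced
# together from the xtrace lines above; a sketch of the traced behaviour,
# not the shipped helper.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # When a node is given, read that NUMA node's own meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Walk the fields until the requested key, e.g. HugePages_Surp.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With those values the no_shrink_alloc test records anon=0, surp=0 and resv=0, echoes them alongside nr_hugepages=1024, and checks them against the configured hugepage count. A hedged sketch of that bookkeeping, assuming a simple exit-on-mismatch (the 1024-page target and the error handling are inferred from the echoed results and the arithmetic checks in the trace), is:

nr_hugepages=1024
anon=$(get_meminfo AnonHugePages)     # 0 in the run above
surp=$(get_meminfo HugePages_Surp)    # 0
resv=$(get_meminfo HugePages_Rsvd)    # 0
total=$(get_meminfo HugePages_Total)  # 1024

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The pool should account for exactly the requested pages: no surplus
# allocations and no outstanding reservations beyond the target.
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1

The final HugePages_Total query that follows re-reads /proc/meminfo the same way to confirm the pool was not shrunk by the allocation.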
00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.609 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43724564 kB' 'MemAvailable: 47232808 kB' 'Buffers: 2704 kB' 'Cached: 12296828 kB' 'SwapCached: 0 kB' 'Active: 9312888 kB' 'Inactive: 3506596 kB' 'Active(anon): 8917784 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523284 kB' 'Mapped: 170752 kB' 'Shmem: 8397832 kB' 'KReclaimable: 200768 kB' 'Slab: 576056 kB' 'SReclaimable: 200768 kB' 'SUnreclaim: 375288 kB' 'KernelStack: 12896 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10020456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1824348 kB' 'DirectMap2M: 13824000 kB' 'DirectMap1G: 53477376 kB' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.610 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20126080 kB' 'MemUsed: 12750860 kB' 'SwapCached: 0 kB' 'Active: 6392060 kB' 'Inactive: 3263864 kB' 'Active(anon): 6202976 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9355272 kB' 'Mapped: 62704 kB' 'AnonPages: 303828 kB' 'Shmem: 5902324 kB' 'KernelStack: 8376 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124220 kB' 'Slab: 321128 kB' 'SReclaimable: 124220 kB' 'SUnreclaim: 196908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.611 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.612 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.613 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.613 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.613 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.613 node0=1024 expecting 1024 00:04:58.613 20:10:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.613 00:04:58.613 real 0m2.730s 00:04:58.613 user 0m1.141s 00:04:58.613 sys 0m1.508s 00:04:58.613 20:10:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.613 20:10:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:58.613 ************************************ 00:04:58.613 END TEST no_shrink_alloc 
00:04:58.613 ************************************ 00:04:58.613 20:10:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:58.613 20:10:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:58.613 00:04:58.613 real 0m11.186s 00:04:58.613 user 0m4.338s 00:04:58.613 sys 0m5.775s 00:04:58.613 20:10:37 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.613 20:10:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.613 ************************************ 00:04:58.613 END TEST hugepages 00:04:58.613 ************************************ 00:04:58.613 20:10:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:58.613 20:10:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:58.613 20:10:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.613 20:10:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.613 20:10:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.613 ************************************ 00:04:58.613 START TEST driver 00:04:58.613 ************************************ 00:04:58.613 20:10:37 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:58.613 * Looking for test storage... 
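The hugepages checks traced above reduce to scanning /proc/meminfo (system-wide) or /sys/devices/system/node/nodeN/meminfo (per NUMA node) for the HugePages_* counters and comparing them with the expected reservation. A minimal standalone sketch of that lookup, written here for illustration and not taken from SPDK's setup/common.sh, could look like this:

#!/usr/bin/env bash
# Illustrative helper (hypothetical, not SPDK's get_meminfo): print one
# meminfo counter, either system-wide or for a single NUMA node.
get_counter() {
    local field=$1 node=$2
    local src=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        src=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; strip it, then match the field.
    sed 's/^Node [0-9]* *//' "$src" | awk -v f="$field:" '$1 == f { print $2 }'
}

get_counter HugePages_Total      # system-wide total; 1024 in the run above
get_counter HugePages_Surp 0     # surplus pages on node 0; 0 in the run above

In the trace above the same lookup reports 1024 total pages and 0 surplus pages on node 0, which is what the "node0=1024 expecting 1024" check asserts before the hugepages test tears down and the driver test begins.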
00:04:58.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:58.870 20:10:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:58.870 20:10:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.870 20:10:37 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.417 20:10:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:01.417 20:10:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.417 20:10:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.417 20:10:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:01.417 ************************************ 00:05:01.417 START TEST guess_driver 00:05:01.417 ************************************ 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:01.417 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:01.418 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:01.418 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:01.418 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:01.418 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:01.418 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:01.418 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:01.418 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:01.418 20:10:39 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:01.418 Looking for driver=vfio-pci 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.418 20:10:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.353 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.612 20:10:40 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.612 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.613 20:10:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.549 20:10:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.549 20:10:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.549 20:10:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.549 20:10:41 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:03.549 20:10:41 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:03.549 20:10:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.549 20:10:41 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.156 00:05:06.156 real 0m4.853s 00:05:06.156 user 0m1.104s 00:05:06.156 sys 0m1.886s 00:05:06.156 20:10:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.156 20:10:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:06.156 ************************************ 00:05:06.156 END TEST guess_driver 00:05:06.156 ************************************ 00:05:06.156 20:10:44 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:06.156 00:05:06.156 real 0m7.363s 00:05:06.156 user 0m1.659s 00:05:06.156 sys 0m2.861s 00:05:06.156 20:10:44 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.156 20:10:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:06.156 ************************************ 00:05:06.156 END TEST driver 00:05:06.156 ************************************ 00:05:06.156 20:10:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:06.156 20:10:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:06.156 20:10:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.156 20:10:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.156 20:10:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:06.156 ************************************ 00:05:06.156 START TEST devices 00:05:06.156 ************************************ 00:05:06.156 20:10:44 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:06.156 * Looking for test storage... 00:05:06.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:06.156 20:10:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:06.156 20:10:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:06.156 20:10:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.156 20:10:44 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:07.533 20:10:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:07.533 20:10:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:07.533 20:10:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:07.533 
20:10:45 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:07.533 No valid GPT data, bailing 00:05:07.533 20:10:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:07.533 20:10:46 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:07.533 20:10:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:07.533 20:10:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:07.533 20:10:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:07.533 20:10:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:07.533 20:10:46 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:07.533 20:10:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:07.533 20:10:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:07.533 20:10:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:07.533 20:10:46 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:07.533 20:10:46 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:07.533 20:10:46 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:07.533 20:10:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.533 20:10:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.533 20:10:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:07.792 ************************************ 00:05:07.792 START TEST nvme_mount 00:05:07.792 ************************************ 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:07.792 20:10:46 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:08.732 Creating new GPT entries in memory. 00:05:08.732 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:08.732 other utilities. 00:05:08.732 20:10:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:08.732 20:10:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.732 20:10:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:08.732 20:10:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.732 20:10:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:09.668 Creating new GPT entries in memory. 00:05:09.668 The operation has completed successfully. 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3908613 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.668 20:10:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.668 20:10:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:11.046 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.046 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:11.305 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:11.305 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:11.305 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:11.305 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.305 20:10:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.680 20:10:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.616 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.876 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.876 00:05:13.876 real 0m6.151s 00:05:13.876 user 0m1.404s 00:05:13.876 sys 0m2.347s 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.876 20:10:52 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.876 ************************************ 00:05:13.876 END TEST nvme_mount 00:05:13.876 ************************************ 00:05:13.876 20:10:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:13.876 20:10:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:13.876 20:10:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.876 20:10:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.876 20:10:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:13.876 ************************************ 00:05:13.876 START TEST dm_mount 00:05:13.876 ************************************ 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:13.876 20:10:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:14.816 Creating new GPT entries in memory. 00:05:14.816 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:14.816 other utilities. 00:05:14.816 20:10:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:14.816 20:10:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.816 20:10:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:14.816 20:10:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:14.816 20:10:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:16.193 Creating new GPT entries in memory. 00:05:16.193 The operation has completed successfully. 00:05:16.193 20:10:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:16.193 20:10:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.193 20:10:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.193 20:10:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.193 20:10:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:17.130 The operation has completed successfully. 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3910994 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:17.130 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.131 20:10:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.065 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:18.324 20:10:56 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.324 20:10:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:19.699 20:10:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:19.699 20:10:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.699 20:10:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:19.699 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:19.699 20:10:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:19.699 20:10:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:19.699 00:05:19.699 real 0m5.776s 00:05:19.699 user 0m0.980s 00:05:19.699 sys 0m1.645s 00:05:19.699 20:10:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.699 20:10:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:19.699 ************************************ 00:05:19.699 END TEST dm_mount 00:05:19.699 ************************************ 00:05:19.699 20:10:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:19.699 20:10:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:19.700 20:10:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:19.700 20:10:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.700 20:10:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.700 20:10:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:19.700 20:10:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.700 20:10:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:19.957 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:19.957 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:19.957 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:19.957 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:19.957 20:10:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:19.957 20:10:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.957 20:10:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:19.957 20:10:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.957 20:10:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:19.957 20:10:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.957 20:10:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:19.957 00:05:19.957 real 0m13.862s 00:05:19.957 user 0m3.045s 00:05:19.957 sys 0m5.030s 00:05:19.957 20:10:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.957 20:10:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:19.957 ************************************ 00:05:19.957 END TEST devices 00:05:19.957 ************************************ 00:05:19.957 20:10:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:19.957 00:05:19.957 real 0m42.921s 00:05:19.957 user 0m12.322s 00:05:19.957 sys 0m18.906s 00:05:19.957 20:10:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.957 20:10:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.957 ************************************ 00:05:19.957 END TEST setup.sh 00:05:19.957 ************************************ 00:05:19.957 20:10:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:19.957 20:10:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:21.332 Hugepages 00:05:21.332 node hugesize free / total 00:05:21.332 node0 1048576kB 0 / 0 00:05:21.332 node0 2048kB 2048 / 2048 00:05:21.332 node1 1048576kB 0 / 0 00:05:21.332 node1 2048kB 0 / 0 00:05:21.332 00:05:21.332 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.332 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:21.332 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:21.332 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:21.332 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:21.332 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:21.332 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:21.332 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:21.332 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:21.332 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:21.332 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:21.332 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:21.332 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:21.332 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:21.332 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:21.332 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:21.332 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:21.332 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:21.332 20:10:59 -- spdk/autotest.sh@130 -- # uname -s 00:05:21.332 20:10:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:21.332 20:10:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:21.332 20:10:59 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.703 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.703 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:22.703 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.703 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.703 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.703 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.703 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.703 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.703 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.667 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.667 20:11:02 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:24.601 20:11:03 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:24.601 20:11:03 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:24.601 20:11:03 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.601 20:11:03 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:24.601 20:11:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:24.601 20:11:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:24.601 20:11:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.601 20:11:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.601 20:11:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:24.601 20:11:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:24.602 20:11:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:24.602 20:11:03 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:25.974 Waiting for block devices as requested 00:05:25.974 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:25.974 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:25.974 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:26.233 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:26.233 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:26.233 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:26.233 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:26.491 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:26.491 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:26.491 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:26.491 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:26.749 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:26.749 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:26.750 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:26.750 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:27.009 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:27.009 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:27.009 20:11:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:27.009 20:11:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:27.009 20:11:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:27.009 20:11:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:27.009 20:11:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:27.009 20:11:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:27.009 20:11:05 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:27.009 20:11:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:27.009 20:11:05 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:27.009 20:11:05 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:27.009 20:11:05 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.009 20:11:05 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.009 20:11:05 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.009 20:11:05 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.009 20:11:05 -- common/autotest_common.sh@1557 -- # continue 00:05:27.009 20:11:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:27.009 20:11:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.009 20:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.009 20:11:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:27.009 20:11:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.009 20:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.009 20:11:05 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:28.389 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:28.389 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:28.389 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:28.389 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:28.389 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:28.389 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:28.389 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:28.389 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:28.389 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:28.389 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:05:28.389 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:28.389 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:28.389 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:28.389 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:28.389 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:28.389 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.328 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:29.588 20:11:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:29.588 20:11:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.588 20:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.588 20:11:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:29.588 20:11:07 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:29.588 20:11:07 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.588 20:11:07 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:29.588 20:11:07 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:29.588 20:11:07 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:29.588 20:11:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:29.588 20:11:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:29.588 20:11:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.588 20:11:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.588 20:11:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:29.588 20:11:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:29.588 20:11:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:29.588 20:11:07 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:29.588 20:11:07 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:29.588 20:11:07 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:29.588 20:11:07 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:29.588 20:11:07 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:29.588 20:11:07 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:29.588 20:11:07 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:29.588 20:11:07 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3916172 00:05:29.588 20:11:07 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.588 20:11:07 -- common/autotest_common.sh@1598 -- # waitforlisten 3916172 00:05:29.588 20:11:07 -- common/autotest_common.sh@829 -- # '[' -z 3916172 ']' 00:05:29.588 20:11:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.588 20:11:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.588 20:11:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.588 20:11:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.588 20:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.588 [2024-07-15 20:11:08.028791] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
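A quick aside on the opal_revert_cleanup trace above: before spdk_tgt is started, the test builds its BDF list by asking scripts/gen_nvme.sh for NVMe traddr values and keeping only controllers whose PCI device ID is 0x0a54. Below is a minimal standalone sketch of that discovery pass, using the same gen_nvme.sh | jq pipeline and sysfs lookup visible in the trace; the rootdir variable, loop structure, and final printf are illustrative additions rather than the test's exact code.

#!/usr/bin/env bash
# Sketch of the BDF filter from the opal_revert_cleanup trace above.
# Assumes it runs from an SPDK checkout, so scripts/gen_nvme.sh and jq are available.
rootdir=$(pwd)
bdfs=()
for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
    # sysfs exposes each controller's PCI device ID; 0x0a54 is the ID the test filters on.
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
done
(( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"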
00:05:29.588 [2024-07-15 20:11:08.028899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916172 ] 00:05:29.588 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.588 [2024-07-15 20:11:08.091772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.869 [2024-07-15 20:11:08.183686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.127 20:11:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.127 20:11:08 -- common/autotest_common.sh@862 -- # return 0 00:05:30.127 20:11:08 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:30.127 20:11:08 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:30.127 20:11:08 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:33.411 nvme0n1 00:05:33.411 20:11:11 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:33.411 [2024-07-15 20:11:11.736166] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:33.411 [2024-07-15 20:11:11.736231] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:33.411 request: 00:05:33.411 { 00:05:33.411 "nvme_ctrlr_name": "nvme0", 00:05:33.411 "password": "test", 00:05:33.411 "method": "bdev_nvme_opal_revert", 00:05:33.411 "req_id": 1 00:05:33.411 } 00:05:33.411 Got JSON-RPC error response 00:05:33.411 response: 00:05:33.411 { 00:05:33.411 "code": -32603, 00:05:33.411 "message": "Internal error" 00:05:33.411 } 00:05:33.411 20:11:11 -- common/autotest_common.sh@1604 -- # true 00:05:33.411 20:11:11 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:33.411 20:11:11 -- common/autotest_common.sh@1608 -- # killprocess 3916172 00:05:33.411 20:11:11 -- common/autotest_common.sh@948 -- # '[' -z 3916172 ']' 00:05:33.411 20:11:11 -- common/autotest_common.sh@952 -- # kill -0 3916172 00:05:33.411 20:11:11 -- common/autotest_common.sh@953 -- # uname 00:05:33.411 20:11:11 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.411 20:11:11 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3916172 00:05:33.411 20:11:11 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.411 20:11:11 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.411 20:11:11 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3916172' 00:05:33.411 killing process with pid 3916172 00:05:33.411 20:11:11 -- common/autotest_common.sh@967 -- # kill 3916172 00:05:33.411 20:11:11 -- common/autotest_common.sh@972 -- # wait 3916172 00:05:33.411 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.411 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.411 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.411 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.411 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.411 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.411 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.411 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152 00:05:33.411 [several hundred consecutive repetitions of the 'EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152' notice omitted here]
00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:33.412 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:35.309 20:11:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:35.309 20:11:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:35.309 20:11:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.309 20:11:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.309 20:11:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:35.309 20:11:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.309 20:11:13 -- common/autotest_common.sh@10 -- # set +x 00:05:35.309 20:11:13 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:35.309 20:11:13 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.309 20:11:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.309 20:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.309 20:11:13 -- common/autotest_common.sh@10 -- # set +x 00:05:35.309 ************************************ 00:05:35.309 START TEST env 00:05:35.309 ************************************ 00:05:35.309 20:11:13 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.309 * Looking for test storage... 
00:05:35.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:35.309 20:11:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.309 20:11:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.309 20:11:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.309 20:11:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.309 ************************************ 00:05:35.309 START TEST env_memory 00:05:35.309 ************************************ 00:05:35.309 20:11:13 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.309 00:05:35.309 00:05:35.309 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.309 http://cunit.sourceforge.net/ 00:05:35.309 00:05:35.309 00:05:35.309 Suite: memory 00:05:35.309 Test: alloc and free memory map ...[2024-07-15 20:11:13.666861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.309 passed 00:05:35.309 Test: mem map translation ...[2024-07-15 20:11:13.686869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.309 [2024-07-15 20:11:13.686893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.309 [2024-07-15 20:11:13.686945] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.309 [2024-07-15 20:11:13.686956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:35.309 passed 00:05:35.309 Test: mem map registration ...[2024-07-15 20:11:13.727341] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:35.309 [2024-07-15 20:11:13.727361] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:35.309 passed 00:05:35.309 Test: mem map adjacent registrations ...passed 00:05:35.309 00:05:35.309 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.309 suites 1 1 n/a 0 0 00:05:35.309 tests 4 4 4 0 0 00:05:35.309 asserts 152 152 152 0 n/a 00:05:35.309 00:05:35.309 Elapsed time = 0.140 seconds 00:05:35.309 00:05:35.309 real 0m0.148s 00:05:35.309 user 0m0.136s 00:05:35.309 sys 0m0.012s 00:05:35.309 20:11:13 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.309 20:11:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:35.309 ************************************ 00:05:35.309 END TEST env_memory 00:05:35.309 ************************************ 00:05:35.310 20:11:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:35.310 20:11:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.310 20:11:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
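Note: memory_ut above (and the vtophys run that starts here) are ordinary binaries in the SPDK build tree, so a failing case can be reproduced outside the autotest harness. The paths below follow the invocations in this log; the vtophys run additionally needs root and reserved hugepages:

    sudo ./test/env/memory/memory_ut       # the mem_map alloc/translate/register checks above
    sudo ./test/env/vtophys/vtophys        # the heap expand/shrink and vtophys checks that follow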
00:05:35.310 20:11:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.310 20:11:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.310 ************************************ 00:05:35.310 START TEST env_vtophys 00:05:35.310 ************************************ 00:05:35.310 20:11:13 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.310 EAL: lib.eal log level changed from notice to debug 00:05:35.310 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.310 EAL: Detected lcore 1 as core 1 on socket 0 00:05:35.310 EAL: Detected lcore 2 as core 2 on socket 0 00:05:35.310 EAL: Detected lcore 3 as core 3 on socket 0 00:05:35.310 EAL: Detected lcore 4 as core 4 on socket 0 00:05:35.310 EAL: Detected lcore 5 as core 5 on socket 0 00:05:35.310 EAL: Detected lcore 6 as core 8 on socket 0 00:05:35.310 EAL: Detected lcore 7 as core 9 on socket 0 00:05:35.310 EAL: Detected lcore 8 as core 10 on socket 0 00:05:35.310 EAL: Detected lcore 9 as core 11 on socket 0 00:05:35.310 EAL: Detected lcore 10 as core 12 on socket 0 00:05:35.310 EAL: Detected lcore 11 as core 13 on socket 0 00:05:35.310 EAL: Detected lcore 12 as core 0 on socket 1 00:05:35.310 EAL: Detected lcore 13 as core 1 on socket 1 00:05:35.310 EAL: Detected lcore 14 as core 2 on socket 1 00:05:35.310 EAL: Detected lcore 15 as core 3 on socket 1 00:05:35.310 EAL: Detected lcore 16 as core 4 on socket 1 00:05:35.310 EAL: Detected lcore 17 as core 5 on socket 1 00:05:35.310 EAL: Detected lcore 18 as core 8 on socket 1 00:05:35.310 EAL: Detected lcore 19 as core 9 on socket 1 00:05:35.310 EAL: Detected lcore 20 as core 10 on socket 1 00:05:35.310 EAL: Detected lcore 21 as core 11 on socket 1 00:05:35.310 EAL: Detected lcore 22 as core 12 on socket 1 00:05:35.310 EAL: Detected lcore 23 as core 13 on socket 1 00:05:35.310 EAL: Detected lcore 24 as core 0 on socket 0 00:05:35.310 EAL: Detected lcore 25 as core 1 on socket 0 00:05:35.310 EAL: Detected lcore 26 as core 2 on socket 0 00:05:35.310 EAL: Detected lcore 27 as core 3 on socket 0 00:05:35.310 EAL: Detected lcore 28 as core 4 on socket 0 00:05:35.310 EAL: Detected lcore 29 as core 5 on socket 0 00:05:35.310 EAL: Detected lcore 30 as core 8 on socket 0 00:05:35.310 EAL: Detected lcore 31 as core 9 on socket 0 00:05:35.310 EAL: Detected lcore 32 as core 10 on socket 0 00:05:35.310 EAL: Detected lcore 33 as core 11 on socket 0 00:05:35.310 EAL: Detected lcore 34 as core 12 on socket 0 00:05:35.310 EAL: Detected lcore 35 as core 13 on socket 0 00:05:35.310 EAL: Detected lcore 36 as core 0 on socket 1 00:05:35.310 EAL: Detected lcore 37 as core 1 on socket 1 00:05:35.310 EAL: Detected lcore 38 as core 2 on socket 1 00:05:35.310 EAL: Detected lcore 39 as core 3 on socket 1 00:05:35.310 EAL: Detected lcore 40 as core 4 on socket 1 00:05:35.310 EAL: Detected lcore 41 as core 5 on socket 1 00:05:35.310 EAL: Detected lcore 42 as core 8 on socket 1 00:05:35.310 EAL: Detected lcore 43 as core 9 on socket 1 00:05:35.310 EAL: Detected lcore 44 as core 10 on socket 1 00:05:35.310 EAL: Detected lcore 45 as core 11 on socket 1 00:05:35.310 EAL: Detected lcore 46 as core 12 on socket 1 00:05:35.310 EAL: Detected lcore 47 as core 13 on socket 1 00:05:35.569 EAL: Maximum logical cores by configuration: 128 00:05:35.569 EAL: Detected CPU lcores: 48 00:05:35.569 EAL: Detected NUMA nodes: 2 00:05:35.569 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:35.569 EAL: Detected shared linkage of DPDK 
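Note: the lcore map above (48 lcores across 2 NUMA nodes, DPDK loaded as shared libraries) is printed by the EAL during env init; a quick cross-check of the same topology from the host side, outside DPDK, is:

    lscpu | grep -E '^(CPU\(s\)|NUMA node)'      # logical CPU count and per-node CPU lists
    cat /sys/devices/system/node/online          # NUMA nodes the kernel exposes

(Host commands only; nothing here is SPDK-specific.)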
00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:35.569 EAL: Registered [vdev] bus. 00:05:35.569 EAL: bus.vdev log level changed from disabled to notice 00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:35.569 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:35.569 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:35.569 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:35.569 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Bus pci wants IOVA as 'DC' 00:05:35.569 EAL: Bus vdev wants IOVA as 'DC' 00:05:35.569 EAL: Buses did not request a specific IOVA mode. 00:05:35.569 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:35.569 EAL: Selected IOVA mode 'VA' 00:05:35.569 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.569 EAL: Probing VFIO support... 00:05:35.569 EAL: IOMMU type 1 (Type 1) is supported 00:05:35.569 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:35.569 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:35.569 EAL: VFIO support initialized 00:05:35.569 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.569 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.569 EAL: Setting up physically contiguous memory... 
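Note: 'IOMMU is available, selecting IOVA as VA mode' and 'VFIO support initialized' above depend on host preparation done before this log was captured: hugepage reservation and rebinding the target devices to vfio-pci. In an SPDK checkout that step is typically handled by scripts/setup.sh (sizes illustrative):

    sudo HUGEMEM=4096 ./scripts/setup.sh      # reserve 2 MB hugepages, bind NVMe/IOAT devices to vfio-pci
    ./scripts/setup.sh status                 # report hugepage counts and the driver bound to each BDF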
00:05:35.569 EAL: Setting maximum number of open files to 524288 00:05:35.569 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.569 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:35.569 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.569 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:35.569 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.569 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:35.569 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.569 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.569 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:35.569 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:35.569 EAL: Hugepages will be freed exactly as allocated. 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: TSC frequency is ~2700000 KHz 00:05:35.569 EAL: Main lcore 0 is ready (tid=7fb2d6bdda00;cpuset=[0]) 00:05:35.569 EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 0 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:35.569 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.569 00:05:35.569 00:05:35.569 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.569 http://cunit.sourceforge.net/ 00:05:35.569 00:05:35.569 00:05:35.569 Suite: components_suite 00:05:35.569 Test: vtophys_malloc_test ...passed 00:05:35.569 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 4 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.569 EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 4 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.569 EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 4 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.569 EAL: Trying to obtain current memory policy. 
00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 4 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.569 EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 4 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.569 EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 4 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.569 EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.569 EAL: Restoring previous memory policy: 4 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.569 EAL: request: mp_malloc_sync 00:05:35.569 EAL: No shared files mode enabled, IPC is disabled 00:05:35.569 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.569 EAL: Trying to obtain current memory policy. 00:05:35.569 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.828 EAL: Restoring previous memory policy: 4 00:05:35.828 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.828 EAL: request: mp_malloc_sync 00:05:35.828 EAL: No shared files mode enabled, IPC is disabled 00:05:35.828 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.828 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.828 EAL: request: mp_malloc_sync 00:05:35.828 EAL: No shared files mode enabled, IPC is disabled 00:05:35.828 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.828 EAL: Trying to obtain current memory policy. 
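Note: each 'Heap on socket 0 was expanded by N MB' / 'shrunk by N MB' pair above is the malloc test allocating and then freeing progressively larger DMA-able buffers, which makes the DPDK heap claim and release 2 MB hugepages on the fly ('Hugepages will be freed exactly as allocated' earlier in this run). The effect can be watched from another shell while the test runs, e.g.:

    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages   # per-node free 2 MB pages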
00:05:35.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.086 EAL: Restoring previous memory policy: 4 00:05:36.086 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.086 EAL: request: mp_malloc_sync 00:05:36.086 EAL: No shared files mode enabled, IPC is disabled 00:05:36.086 EAL: Heap on socket 0 was expanded by 514MB 00:05:36.086 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.086 EAL: request: mp_malloc_sync 00:05:36.086 EAL: No shared files mode enabled, IPC is disabled 00:05:36.086 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.086 EAL: Trying to obtain current memory policy. 00:05:36.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.653 EAL: Restoring previous memory policy: 4 00:05:36.653 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.653 EAL: request: mp_malloc_sync 00:05:36.653 EAL: No shared files mode enabled, IPC is disabled 00:05:36.653 EAL: Heap on socket 0 was expanded by 1026MB 00:05:36.653 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.911 EAL: request: mp_malloc_sync 00:05:36.911 EAL: No shared files mode enabled, IPC is disabled 00:05:36.911 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.911 passed 00:05:36.911 00:05:36.911 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.911 suites 1 1 n/a 0 0 00:05:36.911 tests 2 2 2 0 0 00:05:36.911 asserts 497 497 497 0 n/a 00:05:36.911 00:05:36.911 Elapsed time = 1.382 seconds 00:05:36.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.911 EAL: request: mp_malloc_sync 00:05:36.911 EAL: No shared files mode enabled, IPC is disabled 00:05:36.911 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.911 EAL: No shared files mode enabled, IPC is disabled 00:05:36.911 EAL: No shared files mode enabled, IPC is disabled 00:05:36.911 EAL: No shared files mode enabled, IPC is disabled 00:05:36.911 00:05:36.911 real 0m1.505s 00:05:36.911 user 0m0.863s 00:05:36.911 sys 0m0.601s 00:05:36.911 20:11:15 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.911 20:11:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.911 ************************************ 00:05:36.911 END TEST env_vtophys 00:05:36.911 ************************************ 00:05:36.911 20:11:15 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.911 20:11:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.911 20:11:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.911 20:11:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.911 20:11:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.911 ************************************ 00:05:36.911 START TEST env_pci 00:05:36.911 ************************************ 00:05:36.911 20:11:15 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.911 00:05:36.911 00:05:36.911 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.911 http://cunit.sourceforge.net/ 00:05:36.911 00:05:36.911 00:05:36.911 Suite: pci 00:05:36.911 Test: pci_hook ...[2024-07-15 20:11:15.383971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3917068 has claimed it 00:05:36.911 EAL: Cannot find device (10000:00:01.0) 00:05:36.911 EAL: Failed to attach device on primary process 00:05:36.911 passed 00:05:36.911 
00:05:36.911 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.911 suites 1 1 n/a 0 0 00:05:36.911 tests 1 1 1 0 0 00:05:36.911 asserts 25 25 25 0 n/a 00:05:36.911 00:05:36.911 Elapsed time = 0.020 seconds 00:05:36.911 00:05:36.911 real 0m0.031s 00:05:36.911 user 0m0.009s 00:05:36.911 sys 0m0.022s 00:05:36.911 20:11:15 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.911 20:11:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.911 ************************************ 00:05:36.911 END TEST env_pci 00:05:36.911 ************************************ 00:05:36.911 20:11:15 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.911 20:11:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.911 20:11:15 env -- env/env.sh@15 -- # uname 00:05:36.911 20:11:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.911 20:11:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.911 20:11:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.911 20:11:15 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:36.911 20:11:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.911 20:11:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.170 ************************************ 00:05:37.170 START TEST env_dpdk_post_init 00:05:37.170 ************************************ 00:05:37.170 20:11:15 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.170 EAL: Detected CPU lcores: 48 00:05:37.170 EAL: Detected NUMA nodes: 2 00:05:37.170 EAL: Detected shared linkage of DPDK 00:05:37.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.170 EAL: Selected IOVA mode 'VA' 00:05:37.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.170 EAL: VFIO support initialized 00:05:37.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.170 EAL: Using IOMMU type 1 (Type 1) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:37.170 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:37.429 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:37.429 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:37.429 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:37.429 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:37.995 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:41.272 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:41.272 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:41.531 Starting DPDK initialization... 00:05:41.531 Starting SPDK post initialization... 00:05:41.531 SPDK NVMe probe 00:05:41.531 Attaching to 0000:88:00.0 00:05:41.531 Attached to 0000:88:00.0 00:05:41.531 Cleaning up... 00:05:41.531 00:05:41.531 real 0m4.377s 00:05:41.531 user 0m3.263s 00:05:41.531 sys 0m0.171s 00:05:41.531 20:11:19 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.531 20:11:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.531 ************************************ 00:05:41.531 END TEST env_dpdk_post_init 00:05:41.531 ************************************ 00:05:41.531 20:11:19 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.531 20:11:19 env -- env/env.sh@26 -- # uname 00:05:41.531 20:11:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.532 20:11:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.532 20:11:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.532 20:11:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.532 20:11:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.532 ************************************ 00:05:41.532 START TEST env_mem_callbacks 00:05:41.532 ************************************ 00:05:41.532 20:11:19 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.532 EAL: Detected CPU lcores: 48 00:05:41.532 EAL: Detected NUMA nodes: 2 00:05:41.532 EAL: Detected shared linkage of DPDK 00:05:41.532 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.532 EAL: Selected IOVA mode 'VA' 00:05:41.532 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.532 EAL: VFIO support initialized 00:05:41.532 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.532 00:05:41.532 00:05:41.532 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.532 http://cunit.sourceforge.net/ 00:05:41.532 00:05:41.532 00:05:41.532 Suite: memory 00:05:41.532 Test: test ... 
00:05:41.532 register 0x200000200000 2097152 00:05:41.532 malloc 3145728 00:05:41.532 register 0x200000400000 4194304 00:05:41.532 buf 0x200000500000 len 3145728 PASSED 00:05:41.532 malloc 64 00:05:41.532 buf 0x2000004fff40 len 64 PASSED 00:05:41.532 malloc 4194304 00:05:41.532 register 0x200000800000 6291456 00:05:41.532 buf 0x200000a00000 len 4194304 PASSED 00:05:41.532 free 0x200000500000 3145728 00:05:41.532 free 0x2000004fff40 64 00:05:41.532 unregister 0x200000400000 4194304 PASSED 00:05:41.532 free 0x200000a00000 4194304 00:05:41.532 unregister 0x200000800000 6291456 PASSED 00:05:41.532 malloc 8388608 00:05:41.532 register 0x200000400000 10485760 00:05:41.532 buf 0x200000600000 len 8388608 PASSED 00:05:41.532 free 0x200000600000 8388608 00:05:41.532 unregister 0x200000400000 10485760 PASSED 00:05:41.532 passed 00:05:41.532 00:05:41.532 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.532 suites 1 1 n/a 0 0 00:05:41.532 tests 1 1 1 0 0 00:05:41.532 asserts 15 15 15 0 n/a 00:05:41.532 00:05:41.532 Elapsed time = 0.005 seconds 00:05:41.532 00:05:41.532 real 0m0.048s 00:05:41.532 user 0m0.010s 00:05:41.532 sys 0m0.038s 00:05:41.532 20:11:19 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.532 20:11:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.532 ************************************ 00:05:41.532 END TEST env_mem_callbacks 00:05:41.532 ************************************ 00:05:41.532 20:11:19 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.532 00:05:41.532 real 0m6.393s 00:05:41.532 user 0m4.390s 00:05:41.532 sys 0m1.035s 00:05:41.532 20:11:19 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.532 20:11:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.532 ************************************ 00:05:41.532 END TEST env 00:05:41.532 ************************************ 00:05:41.532 20:11:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.532 20:11:19 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.532 20:11:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.532 20:11:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.532 20:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:41.532 ************************************ 00:05:41.532 START TEST rpc 00:05:41.532 ************************************ 00:05:41.532 20:11:19 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.532 * Looking for test storage... 00:05:41.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.532 20:11:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3917717 00:05:41.532 20:11:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.532 20:11:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.532 20:11:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3917717 00:05:41.532 20:11:20 rpc -- common/autotest_common.sh@829 -- # '[' -z 3917717 ']' 00:05:41.532 20:11:20 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.532 20:11:20 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.532 20:11:20 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:41.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.532 20:11:20 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.532 20:11:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.790 [2024-07-15 20:11:20.099036] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:05:41.790 [2024-07-15 20:11:20.099142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917717 ] 00:05:41.790 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.790 [2024-07-15 20:11:20.155599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.790 [2024-07-15 20:11:20.250676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.790 [2024-07-15 20:11:20.250724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3917717' to capture a snapshot of events at runtime. 00:05:41.790 [2024-07-15 20:11:20.250753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.790 [2024-07-15 20:11:20.250765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.790 [2024-07-15 20:11:20.250775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3917717 for offline analysis/debug. 00:05:41.790 [2024-07-15 20:11:20.250806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.047 20:11:20 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.047 20:11:20 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.047 20:11:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.047 20:11:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.047 20:11:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.047 20:11:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.047 20:11:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.047 20:11:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.047 20:11:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.047 ************************************ 00:05:42.047 START TEST rpc_integrity 00:05:42.047 ************************************ 00:05:42.047 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.047 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.047 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.047 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.047 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.047 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:42.047 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.047 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.047 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.047 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.047 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.305 { 00:05:42.305 "name": "Malloc0", 00:05:42.305 "aliases": [ 00:05:42.305 "3c37819f-ec11-4382-9b8a-4906ad4092a6" 00:05:42.305 ], 00:05:42.305 "product_name": "Malloc disk", 00:05:42.305 "block_size": 512, 00:05:42.305 "num_blocks": 16384, 00:05:42.305 "uuid": "3c37819f-ec11-4382-9b8a-4906ad4092a6", 00:05:42.305 "assigned_rate_limits": { 00:05:42.305 "rw_ios_per_sec": 0, 00:05:42.305 "rw_mbytes_per_sec": 0, 00:05:42.305 "r_mbytes_per_sec": 0, 00:05:42.305 "w_mbytes_per_sec": 0 00:05:42.305 }, 00:05:42.305 "claimed": false, 00:05:42.305 "zoned": false, 00:05:42.305 "supported_io_types": { 00:05:42.305 "read": true, 00:05:42.305 "write": true, 00:05:42.305 "unmap": true, 00:05:42.305 "flush": true, 00:05:42.305 "reset": true, 00:05:42.305 "nvme_admin": false, 00:05:42.305 "nvme_io": false, 00:05:42.305 "nvme_io_md": false, 00:05:42.305 "write_zeroes": true, 00:05:42.305 "zcopy": true, 00:05:42.305 "get_zone_info": false, 00:05:42.305 "zone_management": false, 00:05:42.305 "zone_append": false, 00:05:42.305 "compare": false, 00:05:42.305 "compare_and_write": false, 00:05:42.305 "abort": true, 00:05:42.305 "seek_hole": false, 00:05:42.305 "seek_data": false, 00:05:42.305 "copy": true, 00:05:42.305 "nvme_iov_md": false 00:05:42.305 }, 00:05:42.305 "memory_domains": [ 00:05:42.305 { 00:05:42.305 "dma_device_id": "system", 00:05:42.305 "dma_device_type": 1 00:05:42.305 }, 00:05:42.305 { 00:05:42.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.305 "dma_device_type": 2 00:05:42.305 } 00:05:42.305 ], 00:05:42.305 "driver_specific": {} 00:05:42.305 } 00:05:42.305 ]' 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.305 [2024-07-15 20:11:20.640089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.305 [2024-07-15 20:11:20.640130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.305 [2024-07-15 20:11:20.640171] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa52af0 00:05:42.305 [2024-07-15 20:11:20.640188] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.305 
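Note: the vbdev_passthru notices around this point come from rpc_cmd driving bdev_passthru_create on top of the Malloc0 bdev created a few lines earlier. Outside the test harness the same sequence can be issued by hand against a running spdk_tgt (default socket /var/tmp/spdk.sock):

    ./scripts/rpc.py bdev_malloc_create 8 512                     # 8 MB malloc bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length                   # 2, as the test checks
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0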
[2024-07-15 20:11:20.641651] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.305 [2024-07-15 20:11:20.641678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.305 Passthru0 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.305 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.305 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.305 { 00:05:42.305 "name": "Malloc0", 00:05:42.305 "aliases": [ 00:05:42.305 "3c37819f-ec11-4382-9b8a-4906ad4092a6" 00:05:42.305 ], 00:05:42.305 "product_name": "Malloc disk", 00:05:42.305 "block_size": 512, 00:05:42.305 "num_blocks": 16384, 00:05:42.305 "uuid": "3c37819f-ec11-4382-9b8a-4906ad4092a6", 00:05:42.305 "assigned_rate_limits": { 00:05:42.305 "rw_ios_per_sec": 0, 00:05:42.305 "rw_mbytes_per_sec": 0, 00:05:42.305 "r_mbytes_per_sec": 0, 00:05:42.305 "w_mbytes_per_sec": 0 00:05:42.305 }, 00:05:42.305 "claimed": true, 00:05:42.305 "claim_type": "exclusive_write", 00:05:42.305 "zoned": false, 00:05:42.305 "supported_io_types": { 00:05:42.305 "read": true, 00:05:42.305 "write": true, 00:05:42.305 "unmap": true, 00:05:42.305 "flush": true, 00:05:42.305 "reset": true, 00:05:42.305 "nvme_admin": false, 00:05:42.305 "nvme_io": false, 00:05:42.305 "nvme_io_md": false, 00:05:42.305 "write_zeroes": true, 00:05:42.305 "zcopy": true, 00:05:42.306 "get_zone_info": false, 00:05:42.306 "zone_management": false, 00:05:42.306 "zone_append": false, 00:05:42.306 "compare": false, 00:05:42.306 "compare_and_write": false, 00:05:42.306 "abort": true, 00:05:42.306 "seek_hole": false, 00:05:42.306 "seek_data": false, 00:05:42.306 "copy": true, 00:05:42.306 "nvme_iov_md": false 00:05:42.306 }, 00:05:42.306 "memory_domains": [ 00:05:42.306 { 00:05:42.306 "dma_device_id": "system", 00:05:42.306 "dma_device_type": 1 00:05:42.306 }, 00:05:42.306 { 00:05:42.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.306 "dma_device_type": 2 00:05:42.306 } 00:05:42.306 ], 00:05:42.306 "driver_specific": {} 00:05:42.306 }, 00:05:42.306 { 00:05:42.306 "name": "Passthru0", 00:05:42.306 "aliases": [ 00:05:42.306 "4f49dd42-452f-5b18-986f-0543d8a6e421" 00:05:42.306 ], 00:05:42.306 "product_name": "passthru", 00:05:42.306 "block_size": 512, 00:05:42.306 "num_blocks": 16384, 00:05:42.306 "uuid": "4f49dd42-452f-5b18-986f-0543d8a6e421", 00:05:42.306 "assigned_rate_limits": { 00:05:42.306 "rw_ios_per_sec": 0, 00:05:42.306 "rw_mbytes_per_sec": 0, 00:05:42.306 "r_mbytes_per_sec": 0, 00:05:42.306 "w_mbytes_per_sec": 0 00:05:42.306 }, 00:05:42.306 "claimed": false, 00:05:42.306 "zoned": false, 00:05:42.306 "supported_io_types": { 00:05:42.306 "read": true, 00:05:42.306 "write": true, 00:05:42.306 "unmap": true, 00:05:42.306 "flush": true, 00:05:42.306 "reset": true, 00:05:42.306 "nvme_admin": false, 00:05:42.306 "nvme_io": false, 00:05:42.306 "nvme_io_md": false, 00:05:42.306 "write_zeroes": true, 00:05:42.306 "zcopy": true, 00:05:42.306 "get_zone_info": false, 00:05:42.306 "zone_management": false, 00:05:42.306 "zone_append": false, 00:05:42.306 "compare": false, 00:05:42.306 "compare_and_write": false, 00:05:42.306 "abort": true, 00:05:42.306 "seek_hole": false, 
00:05:42.306 "seek_data": false, 00:05:42.306 "copy": true, 00:05:42.306 "nvme_iov_md": false 00:05:42.306 }, 00:05:42.306 "memory_domains": [ 00:05:42.306 { 00:05:42.306 "dma_device_id": "system", 00:05:42.306 "dma_device_type": 1 00:05:42.306 }, 00:05:42.306 { 00:05:42.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.306 "dma_device_type": 2 00:05:42.306 } 00:05:42.306 ], 00:05:42.306 "driver_specific": { 00:05:42.306 "passthru": { 00:05:42.306 "name": "Passthru0", 00:05:42.306 "base_bdev_name": "Malloc0" 00:05:42.306 } 00:05:42.306 } 00:05:42.306 } 00:05:42.306 ]' 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.306 20:11:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.306 00:05:42.306 real 0m0.229s 00:05:42.306 user 0m0.150s 00:05:42.306 sys 0m0.025s 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.306 20:11:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 ************************************ 00:05:42.306 END TEST rpc_integrity 00:05:42.306 ************************************ 00:05:42.306 20:11:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.306 20:11:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.306 20:11:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.306 20:11:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.306 20:11:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 ************************************ 00:05:42.306 START TEST rpc_plugins 00:05:42.306 ************************************ 00:05:42.306 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:42.306 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.306 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.306 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.306 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.306 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:42.306 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.306 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.306 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.306 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.306 { 00:05:42.306 "name": "Malloc1", 00:05:42.306 "aliases": [ 00:05:42.306 "3e807263-939d-431c-9484-74fbe8047599" 00:05:42.306 ], 00:05:42.306 "product_name": "Malloc disk", 00:05:42.306 "block_size": 4096, 00:05:42.306 "num_blocks": 256, 00:05:42.306 "uuid": "3e807263-939d-431c-9484-74fbe8047599", 00:05:42.306 "assigned_rate_limits": { 00:05:42.306 "rw_ios_per_sec": 0, 00:05:42.306 "rw_mbytes_per_sec": 0, 00:05:42.306 "r_mbytes_per_sec": 0, 00:05:42.306 "w_mbytes_per_sec": 0 00:05:42.306 }, 00:05:42.306 "claimed": false, 00:05:42.306 "zoned": false, 00:05:42.306 "supported_io_types": { 00:05:42.306 "read": true, 00:05:42.306 "write": true, 00:05:42.306 "unmap": true, 00:05:42.306 "flush": true, 00:05:42.306 "reset": true, 00:05:42.306 "nvme_admin": false, 00:05:42.306 "nvme_io": false, 00:05:42.306 "nvme_io_md": false, 00:05:42.306 "write_zeroes": true, 00:05:42.306 "zcopy": true, 00:05:42.306 "get_zone_info": false, 00:05:42.306 "zone_management": false, 00:05:42.306 "zone_append": false, 00:05:42.306 "compare": false, 00:05:42.306 "compare_and_write": false, 00:05:42.306 "abort": true, 00:05:42.306 "seek_hole": false, 00:05:42.306 "seek_data": false, 00:05:42.306 "copy": true, 00:05:42.306 "nvme_iov_md": false 00:05:42.306 }, 00:05:42.306 "memory_domains": [ 00:05:42.306 { 00:05:42.306 "dma_device_id": "system", 00:05:42.306 "dma_device_type": 1 00:05:42.306 }, 00:05:42.306 { 00:05:42.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.306 "dma_device_type": 2 00:05:42.306 } 00:05:42.306 ], 00:05:42.306 "driver_specific": {} 00:05:42.306 } 00:05:42.306 ]' 00:05:42.306 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.565 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.565 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.565 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.565 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.565 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.565 20:11:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.565 00:05:42.565 real 0m0.114s 00:05:42.565 user 0m0.078s 00:05:42.565 sys 0m0.009s 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.565 20:11:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.565 ************************************ 00:05:42.565 END TEST rpc_plugins 00:05:42.565 ************************************ 00:05:42.565 20:11:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.565 20:11:20 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.565 20:11:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.565 20:11:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.565 20:11:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.565 ************************************ 00:05:42.565 START TEST rpc_trace_cmd_test 00:05:42.565 ************************************ 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.565 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3917717", 00:05:42.565 "tpoint_group_mask": "0x8", 00:05:42.565 "iscsi_conn": { 00:05:42.565 "mask": "0x2", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "scsi": { 00:05:42.565 "mask": "0x4", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "bdev": { 00:05:42.565 "mask": "0x8", 00:05:42.565 "tpoint_mask": "0xffffffffffffffff" 00:05:42.565 }, 00:05:42.565 "nvmf_rdma": { 00:05:42.565 "mask": "0x10", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "nvmf_tcp": { 00:05:42.565 "mask": "0x20", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "ftl": { 00:05:42.565 "mask": "0x40", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "blobfs": { 00:05:42.565 "mask": "0x80", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "dsa": { 00:05:42.565 "mask": "0x200", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "thread": { 00:05:42.565 "mask": "0x400", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "nvme_pcie": { 00:05:42.565 "mask": "0x800", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "iaa": { 00:05:42.565 "mask": "0x1000", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "nvme_tcp": { 00:05:42.565 "mask": "0x2000", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "bdev_nvme": { 00:05:42.565 "mask": "0x4000", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 }, 00:05:42.565 "sock": { 00:05:42.565 "mask": "0x8000", 00:05:42.565 "tpoint_mask": "0x0" 00:05:42.565 } 00:05:42.565 }' 00:05:42.565 20:11:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.565 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:42.565 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.565 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.565 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.565 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.565 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.824 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.824 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.824 20:11:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
00:05:42.824 00:05:42.824 real 0m0.193s 00:05:42.824 user 0m0.171s 00:05:42.824 sys 0m0.016s 00:05:42.824 20:11:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.824 20:11:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.824 ************************************ 00:05:42.824 END TEST rpc_trace_cmd_test 00:05:42.824 ************************************ 00:05:42.824 20:11:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.824 20:11:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.824 20:11:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.824 20:11:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.824 20:11:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.824 20:11:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.824 20:11:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.824 ************************************ 00:05:42.824 START TEST rpc_daemon_integrity 00:05:42.824 ************************************ 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.824 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.824 { 00:05:42.824 "name": "Malloc2", 00:05:42.824 "aliases": [ 00:05:42.824 "8b60ef6f-cf2c-4385-867a-fe70338109e1" 00:05:42.824 ], 00:05:42.824 "product_name": "Malloc disk", 00:05:42.824 "block_size": 512, 00:05:42.824 "num_blocks": 16384, 00:05:42.824 "uuid": "8b60ef6f-cf2c-4385-867a-fe70338109e1", 00:05:42.824 "assigned_rate_limits": { 00:05:42.824 "rw_ios_per_sec": 0, 00:05:42.824 "rw_mbytes_per_sec": 0, 00:05:42.824 "r_mbytes_per_sec": 0, 00:05:42.824 "w_mbytes_per_sec": 0 00:05:42.824 }, 00:05:42.824 "claimed": false, 00:05:42.824 "zoned": false, 00:05:42.825 "supported_io_types": { 00:05:42.825 "read": true, 00:05:42.825 "write": true, 00:05:42.825 "unmap": true, 00:05:42.825 "flush": true, 00:05:42.825 "reset": true, 00:05:42.825 "nvme_admin": false, 00:05:42.825 "nvme_io": false, 
00:05:42.825 "nvme_io_md": false, 00:05:42.825 "write_zeroes": true, 00:05:42.825 "zcopy": true, 00:05:42.825 "get_zone_info": false, 00:05:42.825 "zone_management": false, 00:05:42.825 "zone_append": false, 00:05:42.825 "compare": false, 00:05:42.825 "compare_and_write": false, 00:05:42.825 "abort": true, 00:05:42.825 "seek_hole": false, 00:05:42.825 "seek_data": false, 00:05:42.825 "copy": true, 00:05:42.825 "nvme_iov_md": false 00:05:42.825 }, 00:05:42.825 "memory_domains": [ 00:05:42.825 { 00:05:42.825 "dma_device_id": "system", 00:05:42.825 "dma_device_type": 1 00:05:42.825 }, 00:05:42.825 { 00:05:42.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.825 "dma_device_type": 2 00:05:42.825 } 00:05:42.825 ], 00:05:42.825 "driver_specific": {} 00:05:42.825 } 00:05:42.825 ]' 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.825 [2024-07-15 20:11:21.314058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.825 [2024-07-15 20:11:21.314097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.825 [2024-07-15 20:11:21.314118] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8a23d0 00:05:42.825 [2024-07-15 20:11:21.314131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.825 [2024-07-15 20:11:21.315403] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.825 [2024-07-15 20:11:21.315431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.825 Passthru0 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.825 { 00:05:42.825 "name": "Malloc2", 00:05:42.825 "aliases": [ 00:05:42.825 "8b60ef6f-cf2c-4385-867a-fe70338109e1" 00:05:42.825 ], 00:05:42.825 "product_name": "Malloc disk", 00:05:42.825 "block_size": 512, 00:05:42.825 "num_blocks": 16384, 00:05:42.825 "uuid": "8b60ef6f-cf2c-4385-867a-fe70338109e1", 00:05:42.825 "assigned_rate_limits": { 00:05:42.825 "rw_ios_per_sec": 0, 00:05:42.825 "rw_mbytes_per_sec": 0, 00:05:42.825 "r_mbytes_per_sec": 0, 00:05:42.825 "w_mbytes_per_sec": 0 00:05:42.825 }, 00:05:42.825 "claimed": true, 00:05:42.825 "claim_type": "exclusive_write", 00:05:42.825 "zoned": false, 00:05:42.825 "supported_io_types": { 00:05:42.825 "read": true, 00:05:42.825 "write": true, 00:05:42.825 "unmap": true, 00:05:42.825 "flush": true, 00:05:42.825 "reset": true, 00:05:42.825 "nvme_admin": false, 00:05:42.825 "nvme_io": false, 00:05:42.825 "nvme_io_md": false, 00:05:42.825 "write_zeroes": true, 00:05:42.825 "zcopy": true, 00:05:42.825 "get_zone_info": 
false, 00:05:42.825 "zone_management": false, 00:05:42.825 "zone_append": false, 00:05:42.825 "compare": false, 00:05:42.825 "compare_and_write": false, 00:05:42.825 "abort": true, 00:05:42.825 "seek_hole": false, 00:05:42.825 "seek_data": false, 00:05:42.825 "copy": true, 00:05:42.825 "nvme_iov_md": false 00:05:42.825 }, 00:05:42.825 "memory_domains": [ 00:05:42.825 { 00:05:42.825 "dma_device_id": "system", 00:05:42.825 "dma_device_type": 1 00:05:42.825 }, 00:05:42.825 { 00:05:42.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.825 "dma_device_type": 2 00:05:42.825 } 00:05:42.825 ], 00:05:42.825 "driver_specific": {} 00:05:42.825 }, 00:05:42.825 { 00:05:42.825 "name": "Passthru0", 00:05:42.825 "aliases": [ 00:05:42.825 "f8a9608b-08aa-544b-a31b-d037c8676c97" 00:05:42.825 ], 00:05:42.825 "product_name": "passthru", 00:05:42.825 "block_size": 512, 00:05:42.825 "num_blocks": 16384, 00:05:42.825 "uuid": "f8a9608b-08aa-544b-a31b-d037c8676c97", 00:05:42.825 "assigned_rate_limits": { 00:05:42.825 "rw_ios_per_sec": 0, 00:05:42.825 "rw_mbytes_per_sec": 0, 00:05:42.825 "r_mbytes_per_sec": 0, 00:05:42.825 "w_mbytes_per_sec": 0 00:05:42.825 }, 00:05:42.825 "claimed": false, 00:05:42.825 "zoned": false, 00:05:42.825 "supported_io_types": { 00:05:42.825 "read": true, 00:05:42.825 "write": true, 00:05:42.825 "unmap": true, 00:05:42.825 "flush": true, 00:05:42.825 "reset": true, 00:05:42.825 "nvme_admin": false, 00:05:42.825 "nvme_io": false, 00:05:42.825 "nvme_io_md": false, 00:05:42.825 "write_zeroes": true, 00:05:42.825 "zcopy": true, 00:05:42.825 "get_zone_info": false, 00:05:42.825 "zone_management": false, 00:05:42.825 "zone_append": false, 00:05:42.825 "compare": false, 00:05:42.825 "compare_and_write": false, 00:05:42.825 "abort": true, 00:05:42.825 "seek_hole": false, 00:05:42.825 "seek_data": false, 00:05:42.825 "copy": true, 00:05:42.825 "nvme_iov_md": false 00:05:42.825 }, 00:05:42.825 "memory_domains": [ 00:05:42.825 { 00:05:42.825 "dma_device_id": "system", 00:05:42.825 "dma_device_type": 1 00:05:42.825 }, 00:05:42.825 { 00:05:42.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.825 "dma_device_type": 2 00:05:42.825 } 00:05:42.825 ], 00:05:42.825 "driver_specific": { 00:05:42.825 "passthru": { 00:05:42.825 "name": "Passthru0", 00:05:42.825 "base_bdev_name": "Malloc2" 00:05:42.825 } 00:05:42.825 } 00:05:42.825 } 00:05:42.825 ]' 00:05:42.825 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.084 20:11:21 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.084 00:05:43.084 real 0m0.233s 00:05:43.084 user 0m0.152s 00:05:43.084 sys 0m0.025s 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.084 20:11:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.084 ************************************ 00:05:43.084 END TEST rpc_daemon_integrity 00:05:43.084 ************************************ 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.084 20:11:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.084 20:11:21 rpc -- rpc/rpc.sh@84 -- # killprocess 3917717 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@948 -- # '[' -z 3917717 ']' 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@952 -- # kill -0 3917717 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3917717 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3917717' 00:05:43.084 killing process with pid 3917717 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@967 -- # kill 3917717 00:05:43.084 20:11:21 rpc -- common/autotest_common.sh@972 -- # wait 3917717 00:05:43.659 00:05:43.659 real 0m1.896s 00:05:43.659 user 0m2.386s 00:05:43.659 sys 0m0.596s 00:05:43.659 20:11:21 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.659 20:11:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.659 ************************************ 00:05:43.659 END TEST rpc 00:05:43.659 ************************************ 00:05:43.659 20:11:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.659 20:11:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.659 20:11:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.659 20:11:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.659 20:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.659 ************************************ 00:05:43.659 START TEST skip_rpc 00:05:43.659 ************************************ 00:05:43.659 20:11:21 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.659 * Looking for test storage... 
00:05:43.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.659 20:11:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.659 20:11:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.659 20:11:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.659 20:11:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.659 20:11:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.659 20:11:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.659 ************************************ 00:05:43.659 START TEST skip_rpc 00:05:43.659 ************************************ 00:05:43.659 20:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:43.659 20:11:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3918156 00:05:43.659 20:11:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.659 20:11:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.659 20:11:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.659 [2024-07-15 20:11:22.067608] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:05:43.659 [2024-07-15 20:11:22.067685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918156 ] 00:05:43.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.659 [2024-07-15 20:11:22.127072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.922 [2024-07-15 20:11:22.217156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3918156 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3918156 ']' 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3918156 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3918156 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3918156' 00:05:49.204 killing process with pid 3918156 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3918156 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3918156 00:05:49.204 00:05:49.204 real 0m5.429s 00:05:49.204 user 0m5.117s 00:05:49.204 sys 0m0.316s 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.204 20:11:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 ************************************ 00:05:49.204 END TEST skip_rpc 00:05:49.204 ************************************ 00:05:49.204 20:11:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.204 20:11:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.204 20:11:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.204 20:11:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.204 20:11:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.204 ************************************ 00:05:49.204 START TEST skip_rpc_with_json 00:05:49.204 ************************************ 00:05:49.204 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:49.204 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.204 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3918844 00:05:49.204 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.204 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.205 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3918844 00:05:49.205 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3918844 ']' 00:05:49.205 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.205 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.205 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.205 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.205 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.205 [2024-07-15 20:11:27.544293] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:05:49.205 [2024-07-15 20:11:27.544396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918844 ] 00:05:49.205 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.205 [2024-07-15 20:11:27.617995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.205 [2024-07-15 20:11:27.717692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.463 [2024-07-15 20:11:27.981930] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.463 request: 00:05:49.463 { 00:05:49.463 "trtype": "tcp", 00:05:49.463 "method": "nvmf_get_transports", 00:05:49.463 "req_id": 1 00:05:49.463 } 00:05:49.463 Got JSON-RPC error response 00:05:49.463 response: 00:05:49.463 { 00:05:49.463 "code": -19, 00:05:49.463 "message": "No such device" 00:05:49.463 } 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.463 [2024-07-15 20:11:27.990046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.463 20:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.721 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.722 { 00:05:49.722 "subsystems": [ 00:05:49.722 { 00:05:49.722 "subsystem": "vfio_user_target", 00:05:49.722 "config": null 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "keyring", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "iobuf", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "iobuf_set_options", 00:05:49.722 "params": { 00:05:49.722 "small_pool_count": 8192, 00:05:49.722 "large_pool_count": 1024, 00:05:49.722 "small_bufsize": 8192, 00:05:49.722 "large_bufsize": 
135168 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "sock", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "sock_set_default_impl", 00:05:49.722 "params": { 00:05:49.722 "impl_name": "posix" 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "sock_impl_set_options", 00:05:49.722 "params": { 00:05:49.722 "impl_name": "ssl", 00:05:49.722 "recv_buf_size": 4096, 00:05:49.722 "send_buf_size": 4096, 00:05:49.722 "enable_recv_pipe": true, 00:05:49.722 "enable_quickack": false, 00:05:49.722 "enable_placement_id": 0, 00:05:49.722 "enable_zerocopy_send_server": true, 00:05:49.722 "enable_zerocopy_send_client": false, 00:05:49.722 "zerocopy_threshold": 0, 00:05:49.722 "tls_version": 0, 00:05:49.722 "enable_ktls": false 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "sock_impl_set_options", 00:05:49.722 "params": { 00:05:49.722 "impl_name": "posix", 00:05:49.722 "recv_buf_size": 2097152, 00:05:49.722 "send_buf_size": 2097152, 00:05:49.722 "enable_recv_pipe": true, 00:05:49.722 "enable_quickack": false, 00:05:49.722 "enable_placement_id": 0, 00:05:49.722 "enable_zerocopy_send_server": true, 00:05:49.722 "enable_zerocopy_send_client": false, 00:05:49.722 "zerocopy_threshold": 0, 00:05:49.722 "tls_version": 0, 00:05:49.722 "enable_ktls": false 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "vmd", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "accel", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "accel_set_options", 00:05:49.722 "params": { 00:05:49.722 "small_cache_size": 128, 00:05:49.722 "large_cache_size": 16, 00:05:49.722 "task_count": 2048, 00:05:49.722 "sequence_count": 2048, 00:05:49.722 "buf_count": 2048 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "bdev", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "bdev_set_options", 00:05:49.722 "params": { 00:05:49.722 "bdev_io_pool_size": 65535, 00:05:49.722 "bdev_io_cache_size": 256, 00:05:49.722 "bdev_auto_examine": true, 00:05:49.722 "iobuf_small_cache_size": 128, 00:05:49.722 "iobuf_large_cache_size": 16 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_raid_set_options", 00:05:49.722 "params": { 00:05:49.722 "process_window_size_kb": 1024 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_iscsi_set_options", 00:05:49.722 "params": { 00:05:49.722 "timeout_sec": 30 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_nvme_set_options", 00:05:49.722 "params": { 00:05:49.722 "action_on_timeout": "none", 00:05:49.722 "timeout_us": 0, 00:05:49.722 "timeout_admin_us": 0, 00:05:49.722 "keep_alive_timeout_ms": 10000, 00:05:49.722 "arbitration_burst": 0, 00:05:49.722 "low_priority_weight": 0, 00:05:49.722 "medium_priority_weight": 0, 00:05:49.722 "high_priority_weight": 0, 00:05:49.722 "nvme_adminq_poll_period_us": 10000, 00:05:49.722 "nvme_ioq_poll_period_us": 0, 00:05:49.722 "io_queue_requests": 0, 00:05:49.722 "delay_cmd_submit": true, 00:05:49.722 "transport_retry_count": 4, 00:05:49.722 "bdev_retry_count": 3, 00:05:49.722 "transport_ack_timeout": 0, 00:05:49.722 "ctrlr_loss_timeout_sec": 0, 00:05:49.722 "reconnect_delay_sec": 0, 00:05:49.722 "fast_io_fail_timeout_sec": 0, 00:05:49.722 "disable_auto_failback": false, 00:05:49.722 "generate_uuids": false, 00:05:49.722 "transport_tos": 0, 
00:05:49.722 "nvme_error_stat": false, 00:05:49.722 "rdma_srq_size": 0, 00:05:49.722 "io_path_stat": false, 00:05:49.722 "allow_accel_sequence": false, 00:05:49.722 "rdma_max_cq_size": 0, 00:05:49.722 "rdma_cm_event_timeout_ms": 0, 00:05:49.722 "dhchap_digests": [ 00:05:49.722 "sha256", 00:05:49.722 "sha384", 00:05:49.722 "sha512" 00:05:49.722 ], 00:05:49.722 "dhchap_dhgroups": [ 00:05:49.722 "null", 00:05:49.722 "ffdhe2048", 00:05:49.722 "ffdhe3072", 00:05:49.722 "ffdhe4096", 00:05:49.722 "ffdhe6144", 00:05:49.722 "ffdhe8192" 00:05:49.722 ] 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_nvme_set_hotplug", 00:05:49.722 "params": { 00:05:49.722 "period_us": 100000, 00:05:49.722 "enable": false 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_wait_for_examine" 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "scsi", 00:05:49.722 "config": null 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "scheduler", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "framework_set_scheduler", 00:05:49.722 "params": { 00:05:49.722 "name": "static" 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "vhost_scsi", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "vhost_blk", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "ublk", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "nbd", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "nvmf", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "nvmf_set_config", 00:05:49.722 "params": { 00:05:49.722 "discovery_filter": "match_any", 00:05:49.722 "admin_cmd_passthru": { 00:05:49.722 "identify_ctrlr": false 00:05:49.722 } 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "nvmf_set_max_subsystems", 00:05:49.722 "params": { 00:05:49.722 "max_subsystems": 1024 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "nvmf_set_crdt", 00:05:49.722 "params": { 00:05:49.722 "crdt1": 0, 00:05:49.722 "crdt2": 0, 00:05:49.722 "crdt3": 0 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "nvmf_create_transport", 00:05:49.722 "params": { 00:05:49.722 "trtype": "TCP", 00:05:49.722 "max_queue_depth": 128, 00:05:49.722 "max_io_qpairs_per_ctrlr": 127, 00:05:49.722 "in_capsule_data_size": 4096, 00:05:49.722 "max_io_size": 131072, 00:05:49.722 "io_unit_size": 131072, 00:05:49.722 "max_aq_depth": 128, 00:05:49.722 "num_shared_buffers": 511, 00:05:49.722 "buf_cache_size": 4294967295, 00:05:49.722 "dif_insert_or_strip": false, 00:05:49.722 "zcopy": false, 00:05:49.722 "c2h_success": true, 00:05:49.722 "sock_priority": 0, 00:05:49.722 "abort_timeout_sec": 1, 00:05:49.722 "ack_timeout": 0, 00:05:49.722 "data_wr_pool_size": 0 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "iscsi", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "iscsi_set_options", 00:05:49.722 "params": { 00:05:49.722 "node_base": "iqn.2016-06.io.spdk", 00:05:49.722 "max_sessions": 128, 00:05:49.722 "max_connections_per_session": 2, 00:05:49.722 "max_queue_depth": 64, 00:05:49.722 "default_time2wait": 2, 00:05:49.722 "default_time2retain": 20, 00:05:49.722 "first_burst_length": 8192, 00:05:49.722 "immediate_data": true, 00:05:49.722 "allow_duplicated_isid": false, 00:05:49.722 
"error_recovery_level": 0, 00:05:49.722 "nop_timeout": 60, 00:05:49.722 "nop_in_interval": 30, 00:05:49.722 "disable_chap": false, 00:05:49.722 "require_chap": false, 00:05:49.722 "mutual_chap": false, 00:05:49.722 "chap_group": 0, 00:05:49.722 "max_large_datain_per_connection": 64, 00:05:49.722 "max_r2t_per_connection": 4, 00:05:49.722 "pdu_pool_size": 36864, 00:05:49.722 "immediate_data_pool_size": 16384, 00:05:49.722 "data_out_pool_size": 2048 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 } 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3918844 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3918844 ']' 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3918844 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.722 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3918844 00:05:49.723 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.723 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.723 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3918844' 00:05:49.723 killing process with pid 3918844 00:05:49.723 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3918844 00:05:49.723 20:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3918844 00:05:50.287 20:11:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3918990 00:05:50.287 20:11:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.287 20:11:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3918990 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3918990 ']' 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3918990 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3918990 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3918990' 00:05:55.551 killing process with pid 3918990 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3918990 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3918990 
00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.551 00:05:55.551 real 0m6.488s 00:05:55.551 user 0m6.174s 00:05:55.551 sys 0m0.717s 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.551 20:11:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.551 ************************************ 00:05:55.551 END TEST skip_rpc_with_json 00:05:55.551 ************************************ 00:05:55.551 20:11:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.551 20:11:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.551 20:11:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.551 20:11:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.551 20:11:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.551 ************************************ 00:05:55.551 START TEST skip_rpc_with_delay 00:05:55.551 ************************************ 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.551 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.551 [2024-07-15 20:11:34.080539] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:55.551 [2024-07-15 20:11:34.080665] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:55.809 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:55.809 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.809 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.809 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.809 00:05:55.809 real 0m0.076s 00:05:55.809 user 0m0.055s 00:05:55.809 sys 0m0.020s 00:05:55.809 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.809 20:11:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.809 ************************************ 00:05:55.809 END TEST skip_rpc_with_delay 00:05:55.809 ************************************ 00:05:55.809 20:11:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.809 20:11:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.809 20:11:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.809 20:11:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.809 20:11:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.809 20:11:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.809 20:11:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.809 ************************************ 00:05:55.809 START TEST exit_on_failed_rpc_init 00:05:55.809 ************************************ 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3919702 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3919702 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3919702 ']' 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.809 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.809 [2024-07-15 20:11:34.197140] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:05:55.810 [2024-07-15 20:11:34.197244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919702 ] 00:05:55.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.810 [2024-07-15 20:11:34.254249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.068 [2024-07-15 20:11:34.342762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.068 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.328 [2024-07-15 20:11:34.649090] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:05:56.328 [2024-07-15 20:11:34.649164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919713 ] 00:05:56.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.328 [2024-07-15 20:11:34.708924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.328 [2024-07-15 20:11:34.803233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.328 [2024-07-15 20:11:34.803374] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:56.328 [2024-07-15 20:11:34.803393] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:56.328 [2024-07-15 20:11:34.803405] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3919702 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3919702 ']' 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3919702 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3919702 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3919702' 00:05:56.586 killing process with pid 3919702 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3919702 00:05:56.586 20:11:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3919702 00:05:56.845 00:05:56.845 real 0m1.177s 00:05:56.845 user 0m1.295s 00:05:56.845 sys 0m0.450s 00:05:56.845 20:11:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.845 20:11:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.845 ************************************ 00:05:56.845 END TEST exit_on_failed_rpc_init 00:05:56.845 ************************************ 00:05:56.845 20:11:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.845 20:11:35 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.845 00:05:56.845 real 0m13.407s 00:05:56.845 user 0m12.734s 00:05:56.845 sys 0m1.663s 00:05:56.845 20:11:35 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.845 20:11:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.845 ************************************ 00:05:56.845 END TEST skip_rpc 00:05:56.845 ************************************ 00:05:56.845 20:11:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.846 20:11:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.846 20:11:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.846 20:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.846 20:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:57.111 ************************************ 00:05:57.111 START TEST rpc_client 00:05:57.111 ************************************ 00:05:57.111 20:11:35 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.111 * Looking for test storage... 00:05:57.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:57.111 20:11:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:57.111 OK 00:05:57.111 20:11:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:57.111 00:05:57.111 real 0m0.067s 00:05:57.111 user 0m0.028s 00:05:57.111 sys 0m0.043s 00:05:57.111 20:11:35 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.111 20:11:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.111 ************************************ 00:05:57.111 END TEST rpc_client 00:05:57.111 ************************************ 00:05:57.111 20:11:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.111 20:11:35 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:57.111 20:11:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.111 20:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.111 20:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:57.111 ************************************ 00:05:57.111 START TEST json_config 00:05:57.111 ************************************ 00:05:57.111 20:11:35 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:57.111 20:11:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.111 
20:11:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.111 20:11:35 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.111 20:11:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.111 20:11:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.111 20:11:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.111 20:11:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.111 20:11:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.111 20:11:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.112 20:11:35 json_config -- paths/export.sh@5 -- # export PATH 00:05:57.112 20:11:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@47 -- # : 0 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.112 20:11:35 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.112 20:11:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:57.112 INFO: JSON configuration test init 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.112 20:11:35 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:57.112 20:11:35 json_config -- json_config/common.sh@9 -- # local app=target 00:05:57.112 20:11:35 json_config -- json_config/common.sh@10 -- # shift 00:05:57.112 20:11:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.112 20:11:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.112 20:11:35 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.112 20:11:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.112 20:11:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.112 20:11:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3919950 00:05:57.112 20:11:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:57.112 20:11:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.112 Waiting for target to run... 00:05:57.112 20:11:35 json_config -- json_config/common.sh@25 -- # waitforlisten 3919950 /var/tmp/spdk_tgt.sock 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 3919950 ']' 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.112 20:11:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.112 [2024-07-15 20:11:35.615639] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:05:57.112 [2024-07-15 20:11:35.615719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919950 ] 00:05:57.381 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.639 [2024-07-15 20:11:36.121333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.897 [2024-07-15 20:11:36.199407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.155 20:11:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.155 20:11:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:58.155 20:11:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:58.155 00:05:58.155 20:11:36 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:58.155 20:11:36 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:58.155 20:11:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.155 20:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.155 20:11:36 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:58.155 20:11:36 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:58.155 20:11:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.155 20:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.155 20:11:36 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:58.155 20:11:36 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:58.155 20:11:36 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:01.432 20:11:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:01.432 20:11:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:01.432 20:11:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.432 20:11:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.432 20:11:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:01.432 20:11:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:01.432 20:11:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:01.432 20:11:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:01.432 20:11:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:01.432 20:11:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:01.688 20:11:39 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:01.688 20:11:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:01.688 20:11:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:01.688 20:11:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:01.688 20:11:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.688 20:11:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:01.688 20:11:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.688 20:11:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:01.688 20:11:40 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.688 20:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.944 MallocForNvmf0 00:06:01.944 20:11:40 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.944 20:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.201 MallocForNvmf1 00:06:02.201 20:11:40 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.201 20:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.459 [2024-07-15 20:11:40.779273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.459 20:11:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.459 20:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.717 20:11:41 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.717 20:11:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.975 20:11:41 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:02.975 20:11:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.233 20:11:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.233 20:11:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.233 [2024-07-15 20:11:41.750418] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.490 20:11:41 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:03.490 20:11:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.490 20:11:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.490 20:11:41 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:03.491 20:11:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.491 20:11:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.491 20:11:41 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:03.491 20:11:41 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.491 20:11:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.748 MallocBdevForConfigChangeCheck 00:06:03.748 20:11:42 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:03.748 20:11:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.748 20:11:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 20:11:42 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:03.748 20:11:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.006 20:11:42 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:04.006 INFO: shutting down applications... 00:06:04.006 20:11:42 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:04.006 20:11:42 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:04.006 20:11:42 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:04.006 20:11:42 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:05.904 Calling clear_iscsi_subsystem 00:06:05.904 Calling clear_nvmf_subsystem 00:06:05.904 Calling clear_nbd_subsystem 00:06:05.904 Calling clear_ublk_subsystem 00:06:05.904 Calling clear_vhost_blk_subsystem 00:06:05.904 Calling clear_vhost_scsi_subsystem 00:06:05.904 Calling clear_bdev_subsystem 00:06:05.904 20:11:44 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:05.904 20:11:44 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:05.904 20:11:44 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:05.904 20:11:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.904 20:11:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:05.904 20:11:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:06.162 20:11:44 json_config -- json_config/json_config.sh@345 -- # break 00:06:06.162 20:11:44 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:06.162 20:11:44 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:06.162 20:11:44 json_config -- json_config/common.sh@31 -- # local app=target 00:06:06.162 20:11:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.162 20:11:44 json_config -- json_config/common.sh@35 -- # [[ -n 3919950 ]] 00:06:06.162 20:11:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3919950 00:06:06.162 20:11:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.162 20:11:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.162 20:11:44 json_config -- json_config/common.sh@41 -- # kill -0 3919950 00:06:06.162 20:11:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.731 20:11:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.731 20:11:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.731 20:11:45 json_config -- json_config/common.sh@41 -- # kill -0 3919950 00:06:06.731 20:11:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.731 20:11:45 json_config -- json_config/common.sh@43 -- # break 00:06:06.731 20:11:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.731 20:11:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:06.731 SPDK target shutdown done 00:06:06.731 20:11:45 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:06.731 INFO: relaunching applications... 00:06:06.731 20:11:45 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.731 20:11:45 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.731 20:11:45 json_config -- json_config/common.sh@10 -- # shift 00:06:06.731 20:11:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.731 20:11:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.731 20:11:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.731 20:11:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.731 20:11:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.731 20:11:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3921235 00:06:06.731 20:11:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.731 20:11:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.731 Waiting for target to run... 00:06:06.731 20:11:45 json_config -- json_config/common.sh@25 -- # waitforlisten 3921235 /var/tmp/spdk_tgt.sock 00:06:06.731 20:11:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 3921235 ']' 00:06:06.731 20:11:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.731 20:11:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.731 20:11:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.731 20:11:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.731 20:11:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.731 [2024-07-15 20:11:45.107664] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:06.731 [2024-07-15 20:11:45.107754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921235 ] 00:06:06.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.299 [2024-07-15 20:11:45.644717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.299 [2024-07-15 20:11:45.725780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.614 [2024-07-15 20:11:48.760606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.614 [2024-07-15 20:11:48.793074] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:11.179 20:11:49 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.179 20:11:49 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:11.179 20:11:49 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.179 00:06:11.179 20:11:49 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:11.179 20:11:49 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:11.179 INFO: Checking if target configuration is the same... 00:06:11.179 20:11:49 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.179 20:11:49 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:11.179 20:11:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.179 + '[' 2 -ne 2 ']' 00:06:11.179 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.179 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:11.179 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.179 +++ basename /dev/fd/62 00:06:11.179 ++ mktemp /tmp/62.XXX 00:06:11.179 + tmp_file_1=/tmp/62.DLk 00:06:11.179 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.179 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.179 + tmp_file_2=/tmp/spdk_tgt_config.json.Rkp 00:06:11.179 + ret=0 00:06:11.179 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.437 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.437 + diff -u /tmp/62.DLk /tmp/spdk_tgt_config.json.Rkp 00:06:11.437 + echo 'INFO: JSON config files are the same' 00:06:11.437 INFO: JSON config files are the same 00:06:11.437 + rm /tmp/62.DLk /tmp/spdk_tgt_config.json.Rkp 00:06:11.437 + exit 0 00:06:11.437 20:11:49 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:11.437 20:11:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:11.437 INFO: changing configuration and checking if this can be detected... 
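The "configuration is the same" check above reduces to a save-and-diff cycle against the JSON file the target was relaunched with. A minimal sketch of that cycle, using only the commands visible in the trace (paths are shortened to the SPDK repo root, and /tmp/live_config.json, /tmp/live_sorted.json and /tmp/file_sorted.json are placeholder names for the mktemp'd files the test actually creates):

  # Relaunch the target from the previously saved configuration
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &

  # Dump the live configuration and normalize both sides before comparing
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'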
00:06:11.437 20:11:49 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.437 20:11:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.695 20:11:50 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.695 20:11:50 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:11.695 20:11:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.695 + '[' 2 -ne 2 ']' 00:06:11.695 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.695 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:11.695 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.695 +++ basename /dev/fd/62 00:06:11.695 ++ mktemp /tmp/62.XXX 00:06:11.695 + tmp_file_1=/tmp/62.PSt 00:06:11.695 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.695 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.695 + tmp_file_2=/tmp/spdk_tgt_config.json.5ok 00:06:11.695 + ret=0 00:06:11.952 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.210 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.210 + diff -u /tmp/62.PSt /tmp/spdk_tgt_config.json.5ok 00:06:12.210 + ret=1 00:06:12.210 + echo '=== Start of file: /tmp/62.PSt ===' 00:06:12.210 + cat /tmp/62.PSt 00:06:12.210 + echo '=== End of file: /tmp/62.PSt ===' 00:06:12.210 + echo '' 00:06:12.210 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5ok ===' 00:06:12.210 + cat /tmp/spdk_tgt_config.json.5ok 00:06:12.210 + echo '=== End of file: /tmp/spdk_tgt_config.json.5ok ===' 00:06:12.210 + echo '' 00:06:12.210 + rm /tmp/62.PSt /tmp/spdk_tgt_config.json.5ok 00:06:12.210 + exit 1 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:12.210 INFO: configuration change detected. 
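The change detection just logged is the same comparison run after one deliberate mutation: the throwaway MallocBdevForConfigChangeCheck bdev is deleted over RPC, and a non-empty diff (diff exiting 1) is what the test reports as a detected change. Sketch under the same assumptions and placeholder file names as above:

  # Remove the bdev created solely for this check, then re-compare
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json || echo 'INFO: configuration change detected.'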
00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@317 -- # [[ -n 3921235 ]] 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.210 20:11:50 json_config -- json_config/json_config.sh@323 -- # killprocess 3921235 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@948 -- # '[' -z 3921235 ']' 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@952 -- # kill -0 3921235 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@953 -- # uname 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3921235 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3921235' 00:06:12.210 killing process with pid 3921235 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@967 -- # kill 3921235 00:06:12.210 20:11:50 json_config -- common/autotest_common.sh@972 -- # wait 3921235 00:06:14.107 20:11:52 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.107 20:11:52 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:14.107 20:11:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.107 20:11:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.107 20:11:52 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:14.107 20:11:52 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:14.107 INFO: Success 00:06:14.107 00:06:14.107 real 0m16.867s 
00:06:14.107 user 0m18.664s 00:06:14.107 sys 0m2.262s 00:06:14.107 20:11:52 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.107 20:11:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.107 ************************************ 00:06:14.107 END TEST json_config 00:06:14.107 ************************************ 00:06:14.107 20:11:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.107 20:11:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.107 20:11:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.107 20:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.107 20:11:52 -- common/autotest_common.sh@10 -- # set +x 00:06:14.107 ************************************ 00:06:14.107 START TEST json_config_extra_key 00:06:14.107 ************************************ 00:06:14.107 20:11:52 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.107 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.107 20:11:52 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.107 20:11:52 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.107 20:11:52 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.107 20:11:52 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.107 20:11:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.107 20:11:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.107 20:11:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:14.107 20:11:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:14.107 20:11:52 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.108 20:11:52 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.108 20:11:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.108 20:11:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.108 20:11:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.108 20:11:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.108 20:11:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.108 20:11:52 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:14.108 20:11:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:14.108 INFO: launching applications... 00:06:14.108 20:11:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3922187 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:14.108 Waiting for target to run... 00:06:14.108 20:11:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3922187 /var/tmp/spdk_tgt.sock 00:06:14.108 20:11:52 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3922187 ']' 00:06:14.108 20:11:52 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.108 20:11:52 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.108 20:11:52 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:14.108 20:11:52 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.108 20:11:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:14.108 [2024-07-15 20:11:52.529525] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:14.108 [2024-07-15 20:11:52.529613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922187 ] 00:06:14.108 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.673 [2024-07-15 20:11:53.019936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.673 [2024-07-15 20:11:53.101932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.238 20:11:53 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.238 20:11:53 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:15.238 00:06:15.238 20:11:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:15.238 INFO: shutting down applications... 00:06:15.238 20:11:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3922187 ]] 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3922187 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3922187 00:06:15.238 20:11:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.495 20:11:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.495 20:11:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.495 20:11:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3922187 00:06:15.495 20:11:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.495 20:11:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.495 20:11:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.495 20:11:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.495 SPDK target shutdown done 00:06:15.495 20:11:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.495 Success 00:06:15.495 00:06:15.495 real 0m1.596s 00:06:15.495 user 0m1.450s 00:06:15.495 sys 0m0.585s 00:06:15.495 20:11:54 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.495 20:11:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.495 ************************************ 00:06:15.495 END TEST json_config_extra_key 00:06:15.495 ************************************ 00:06:15.753 20:11:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.753 20:11:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.753 20:11:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.753 20:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.753 20:11:54 -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.753 ************************************ 00:06:15.753 START TEST alias_rpc 00:06:15.753 ************************************ 00:06:15.753 20:11:54 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.753 * Looking for test storage... 00:06:15.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:15.753 20:11:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.753 20:11:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3922495 00:06:15.753 20:11:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.753 20:11:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3922495 00:06:15.753 20:11:54 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3922495 ']' 00:06:15.753 20:11:54 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.753 20:11:54 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.753 20:11:54 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.753 20:11:54 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.753 20:11:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.753 [2024-07-15 20:11:54.173031] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:15.753 [2024-07-15 20:11:54.173113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922495 ] 00:06:15.753 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.753 [2024-07-15 20:11:54.230141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.010 [2024-07-15 20:11:54.314553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.266 20:11:54 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.267 20:11:54 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:16.267 20:11:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:16.524 20:11:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3922495 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3922495 ']' 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3922495 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922495 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922495' 00:06:16.524 killing process with pid 3922495 00:06:16.524 20:11:54 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3922495 00:06:16.524 20:11:54 alias_rpc -- common/autotest_common.sh@972 -- # wait 3922495 00:06:16.782 00:06:16.782 real 0m1.196s 00:06:16.782 user 0m1.281s 00:06:16.782 sys 0m0.410s 00:06:16.782 20:11:55 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.782 20:11:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.782 ************************************ 00:06:16.782 END TEST alias_rpc 00:06:16.782 ************************************ 00:06:16.782 20:11:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.782 20:11:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:16.782 20:11:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.782 20:11:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.782 20:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.782 20:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.040 ************************************ 00:06:17.040 START TEST spdkcli_tcp 00:06:17.040 ************************************ 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:17.040 * Looking for test storage... 00:06:17.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3922682 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:17.040 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3922682 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3922682 ']' 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.040 20:11:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.040 [2024-07-15 20:11:55.421622] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:17.040 [2024-07-15 20:11:55.421715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922682 ] 00:06:17.040 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.040 [2024-07-15 20:11:55.480062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.040 [2024-07-15 20:11:55.564180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.040 [2024-07-15 20:11:55.564184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.297 20:11:55 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.297 20:11:55 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:17.297 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3922692 00:06:17.297 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.297 20:11:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.554 [ 00:06:17.554 "bdev_malloc_delete", 00:06:17.554 "bdev_malloc_create", 00:06:17.554 "bdev_null_resize", 00:06:17.554 "bdev_null_delete", 00:06:17.554 "bdev_null_create", 00:06:17.554 "bdev_nvme_cuse_unregister", 00:06:17.554 "bdev_nvme_cuse_register", 00:06:17.554 "bdev_opal_new_user", 00:06:17.554 "bdev_opal_set_lock_state", 00:06:17.554 "bdev_opal_delete", 00:06:17.554 "bdev_opal_get_info", 00:06:17.554 "bdev_opal_create", 00:06:17.554 "bdev_nvme_opal_revert", 00:06:17.554 "bdev_nvme_opal_init", 00:06:17.554 "bdev_nvme_send_cmd", 00:06:17.554 "bdev_nvme_get_path_iostat", 00:06:17.554 "bdev_nvme_get_mdns_discovery_info", 00:06:17.554 "bdev_nvme_stop_mdns_discovery", 00:06:17.554 "bdev_nvme_start_mdns_discovery", 00:06:17.554 "bdev_nvme_set_multipath_policy", 00:06:17.554 "bdev_nvme_set_preferred_path", 00:06:17.554 "bdev_nvme_get_io_paths", 00:06:17.554 "bdev_nvme_remove_error_injection", 00:06:17.554 "bdev_nvme_add_error_injection", 00:06:17.554 "bdev_nvme_get_discovery_info", 00:06:17.554 "bdev_nvme_stop_discovery", 00:06:17.554 "bdev_nvme_start_discovery", 00:06:17.554 "bdev_nvme_get_controller_health_info", 00:06:17.554 "bdev_nvme_disable_controller", 00:06:17.554 "bdev_nvme_enable_controller", 00:06:17.554 "bdev_nvme_reset_controller", 00:06:17.554 "bdev_nvme_get_transport_statistics", 00:06:17.554 "bdev_nvme_apply_firmware", 00:06:17.554 "bdev_nvme_detach_controller", 00:06:17.554 "bdev_nvme_get_controllers", 00:06:17.554 "bdev_nvme_attach_controller", 00:06:17.554 "bdev_nvme_set_hotplug", 00:06:17.554 "bdev_nvme_set_options", 00:06:17.554 "bdev_passthru_delete", 00:06:17.554 "bdev_passthru_create", 00:06:17.554 "bdev_lvol_set_parent_bdev", 00:06:17.554 "bdev_lvol_set_parent", 00:06:17.554 "bdev_lvol_check_shallow_copy", 00:06:17.554 "bdev_lvol_start_shallow_copy", 00:06:17.554 "bdev_lvol_grow_lvstore", 00:06:17.554 "bdev_lvol_get_lvols", 00:06:17.554 "bdev_lvol_get_lvstores", 00:06:17.554 "bdev_lvol_delete", 00:06:17.554 "bdev_lvol_set_read_only", 00:06:17.554 "bdev_lvol_resize", 00:06:17.554 "bdev_lvol_decouple_parent", 00:06:17.554 "bdev_lvol_inflate", 00:06:17.554 "bdev_lvol_rename", 00:06:17.554 "bdev_lvol_clone_bdev", 00:06:17.554 "bdev_lvol_clone", 00:06:17.554 "bdev_lvol_snapshot", 00:06:17.554 "bdev_lvol_create", 00:06:17.554 "bdev_lvol_delete_lvstore", 00:06:17.555 
"bdev_lvol_rename_lvstore", 00:06:17.555 "bdev_lvol_create_lvstore", 00:06:17.555 "bdev_raid_set_options", 00:06:17.555 "bdev_raid_remove_base_bdev", 00:06:17.555 "bdev_raid_add_base_bdev", 00:06:17.555 "bdev_raid_delete", 00:06:17.555 "bdev_raid_create", 00:06:17.555 "bdev_raid_get_bdevs", 00:06:17.555 "bdev_error_inject_error", 00:06:17.555 "bdev_error_delete", 00:06:17.555 "bdev_error_create", 00:06:17.555 "bdev_split_delete", 00:06:17.555 "bdev_split_create", 00:06:17.555 "bdev_delay_delete", 00:06:17.555 "bdev_delay_create", 00:06:17.555 "bdev_delay_update_latency", 00:06:17.555 "bdev_zone_block_delete", 00:06:17.555 "bdev_zone_block_create", 00:06:17.555 "blobfs_create", 00:06:17.555 "blobfs_detect", 00:06:17.555 "blobfs_set_cache_size", 00:06:17.555 "bdev_aio_delete", 00:06:17.555 "bdev_aio_rescan", 00:06:17.555 "bdev_aio_create", 00:06:17.555 "bdev_ftl_set_property", 00:06:17.555 "bdev_ftl_get_properties", 00:06:17.555 "bdev_ftl_get_stats", 00:06:17.555 "bdev_ftl_unmap", 00:06:17.555 "bdev_ftl_unload", 00:06:17.555 "bdev_ftl_delete", 00:06:17.555 "bdev_ftl_load", 00:06:17.555 "bdev_ftl_create", 00:06:17.555 "bdev_virtio_attach_controller", 00:06:17.555 "bdev_virtio_scsi_get_devices", 00:06:17.555 "bdev_virtio_detach_controller", 00:06:17.555 "bdev_virtio_blk_set_hotplug", 00:06:17.555 "bdev_iscsi_delete", 00:06:17.555 "bdev_iscsi_create", 00:06:17.555 "bdev_iscsi_set_options", 00:06:17.555 "accel_error_inject_error", 00:06:17.555 "ioat_scan_accel_module", 00:06:17.555 "dsa_scan_accel_module", 00:06:17.555 "iaa_scan_accel_module", 00:06:17.555 "vfu_virtio_create_scsi_endpoint", 00:06:17.555 "vfu_virtio_scsi_remove_target", 00:06:17.555 "vfu_virtio_scsi_add_target", 00:06:17.555 "vfu_virtio_create_blk_endpoint", 00:06:17.555 "vfu_virtio_delete_endpoint", 00:06:17.555 "keyring_file_remove_key", 00:06:17.555 "keyring_file_add_key", 00:06:17.555 "keyring_linux_set_options", 00:06:17.555 "iscsi_get_histogram", 00:06:17.555 "iscsi_enable_histogram", 00:06:17.555 "iscsi_set_options", 00:06:17.555 "iscsi_get_auth_groups", 00:06:17.555 "iscsi_auth_group_remove_secret", 00:06:17.555 "iscsi_auth_group_add_secret", 00:06:17.555 "iscsi_delete_auth_group", 00:06:17.555 "iscsi_create_auth_group", 00:06:17.555 "iscsi_set_discovery_auth", 00:06:17.555 "iscsi_get_options", 00:06:17.555 "iscsi_target_node_request_logout", 00:06:17.555 "iscsi_target_node_set_redirect", 00:06:17.555 "iscsi_target_node_set_auth", 00:06:17.555 "iscsi_target_node_add_lun", 00:06:17.555 "iscsi_get_stats", 00:06:17.555 "iscsi_get_connections", 00:06:17.555 "iscsi_portal_group_set_auth", 00:06:17.555 "iscsi_start_portal_group", 00:06:17.555 "iscsi_delete_portal_group", 00:06:17.555 "iscsi_create_portal_group", 00:06:17.555 "iscsi_get_portal_groups", 00:06:17.555 "iscsi_delete_target_node", 00:06:17.555 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.555 "iscsi_target_node_add_pg_ig_maps", 00:06:17.555 "iscsi_create_target_node", 00:06:17.555 "iscsi_get_target_nodes", 00:06:17.555 "iscsi_delete_initiator_group", 00:06:17.555 "iscsi_initiator_group_remove_initiators", 00:06:17.555 "iscsi_initiator_group_add_initiators", 00:06:17.555 "iscsi_create_initiator_group", 00:06:17.555 "iscsi_get_initiator_groups", 00:06:17.555 "nvmf_set_crdt", 00:06:17.555 "nvmf_set_config", 00:06:17.555 "nvmf_set_max_subsystems", 00:06:17.555 "nvmf_stop_mdns_prr", 00:06:17.555 "nvmf_publish_mdns_prr", 00:06:17.555 "nvmf_subsystem_get_listeners", 00:06:17.555 "nvmf_subsystem_get_qpairs", 00:06:17.555 "nvmf_subsystem_get_controllers", 00:06:17.555 
"nvmf_get_stats", 00:06:17.555 "nvmf_get_transports", 00:06:17.555 "nvmf_create_transport", 00:06:17.555 "nvmf_get_targets", 00:06:17.555 "nvmf_delete_target", 00:06:17.555 "nvmf_create_target", 00:06:17.555 "nvmf_subsystem_allow_any_host", 00:06:17.555 "nvmf_subsystem_remove_host", 00:06:17.555 "nvmf_subsystem_add_host", 00:06:17.555 "nvmf_ns_remove_host", 00:06:17.555 "nvmf_ns_add_host", 00:06:17.555 "nvmf_subsystem_remove_ns", 00:06:17.555 "nvmf_subsystem_add_ns", 00:06:17.555 "nvmf_subsystem_listener_set_ana_state", 00:06:17.555 "nvmf_discovery_get_referrals", 00:06:17.555 "nvmf_discovery_remove_referral", 00:06:17.555 "nvmf_discovery_add_referral", 00:06:17.555 "nvmf_subsystem_remove_listener", 00:06:17.555 "nvmf_subsystem_add_listener", 00:06:17.555 "nvmf_delete_subsystem", 00:06:17.555 "nvmf_create_subsystem", 00:06:17.555 "nvmf_get_subsystems", 00:06:17.555 "env_dpdk_get_mem_stats", 00:06:17.555 "nbd_get_disks", 00:06:17.555 "nbd_stop_disk", 00:06:17.555 "nbd_start_disk", 00:06:17.555 "ublk_recover_disk", 00:06:17.555 "ublk_get_disks", 00:06:17.555 "ublk_stop_disk", 00:06:17.555 "ublk_start_disk", 00:06:17.555 "ublk_destroy_target", 00:06:17.555 "ublk_create_target", 00:06:17.555 "virtio_blk_create_transport", 00:06:17.555 "virtio_blk_get_transports", 00:06:17.555 "vhost_controller_set_coalescing", 00:06:17.555 "vhost_get_controllers", 00:06:17.555 "vhost_delete_controller", 00:06:17.555 "vhost_create_blk_controller", 00:06:17.555 "vhost_scsi_controller_remove_target", 00:06:17.555 "vhost_scsi_controller_add_target", 00:06:17.555 "vhost_start_scsi_controller", 00:06:17.555 "vhost_create_scsi_controller", 00:06:17.555 "thread_set_cpumask", 00:06:17.555 "framework_get_governor", 00:06:17.555 "framework_get_scheduler", 00:06:17.555 "framework_set_scheduler", 00:06:17.555 "framework_get_reactors", 00:06:17.555 "thread_get_io_channels", 00:06:17.555 "thread_get_pollers", 00:06:17.555 "thread_get_stats", 00:06:17.555 "framework_monitor_context_switch", 00:06:17.555 "spdk_kill_instance", 00:06:17.555 "log_enable_timestamps", 00:06:17.555 "log_get_flags", 00:06:17.555 "log_clear_flag", 00:06:17.555 "log_set_flag", 00:06:17.555 "log_get_level", 00:06:17.555 "log_set_level", 00:06:17.555 "log_get_print_level", 00:06:17.555 "log_set_print_level", 00:06:17.555 "framework_enable_cpumask_locks", 00:06:17.555 "framework_disable_cpumask_locks", 00:06:17.555 "framework_wait_init", 00:06:17.555 "framework_start_init", 00:06:17.555 "scsi_get_devices", 00:06:17.555 "bdev_get_histogram", 00:06:17.555 "bdev_enable_histogram", 00:06:17.555 "bdev_set_qos_limit", 00:06:17.555 "bdev_set_qd_sampling_period", 00:06:17.555 "bdev_get_bdevs", 00:06:17.555 "bdev_reset_iostat", 00:06:17.555 "bdev_get_iostat", 00:06:17.555 "bdev_examine", 00:06:17.555 "bdev_wait_for_examine", 00:06:17.555 "bdev_set_options", 00:06:17.555 "notify_get_notifications", 00:06:17.555 "notify_get_types", 00:06:17.555 "accel_get_stats", 00:06:17.555 "accel_set_options", 00:06:17.555 "accel_set_driver", 00:06:17.555 "accel_crypto_key_destroy", 00:06:17.555 "accel_crypto_keys_get", 00:06:17.555 "accel_crypto_key_create", 00:06:17.555 "accel_assign_opc", 00:06:17.555 "accel_get_module_info", 00:06:17.555 "accel_get_opc_assignments", 00:06:17.555 "vmd_rescan", 00:06:17.555 "vmd_remove_device", 00:06:17.555 "vmd_enable", 00:06:17.555 "sock_get_default_impl", 00:06:17.555 "sock_set_default_impl", 00:06:17.555 "sock_impl_set_options", 00:06:17.555 "sock_impl_get_options", 00:06:17.555 "iobuf_get_stats", 00:06:17.555 "iobuf_set_options", 
00:06:17.555 "keyring_get_keys", 00:06:17.555 "framework_get_pci_devices", 00:06:17.555 "framework_get_config", 00:06:17.555 "framework_get_subsystems", 00:06:17.555 "vfu_tgt_set_base_path", 00:06:17.555 "trace_get_info", 00:06:17.555 "trace_get_tpoint_group_mask", 00:06:17.555 "trace_disable_tpoint_group", 00:06:17.555 "trace_enable_tpoint_group", 00:06:17.555 "trace_clear_tpoint_mask", 00:06:17.555 "trace_set_tpoint_mask", 00:06:17.555 "spdk_get_version", 00:06:17.555 "rpc_get_methods" 00:06:17.555 ] 00:06:17.555 20:11:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.555 20:11:56 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.555 20:11:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.812 20:11:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.812 20:11:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3922682 00:06:17.812 20:11:56 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3922682 ']' 00:06:17.812 20:11:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3922682 00:06:17.812 20:11:56 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:17.812 20:11:56 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.812 20:11:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922682 00:06:17.812 20:11:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.813 20:11:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.813 20:11:56 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922682' 00:06:17.813 killing process with pid 3922682 00:06:17.813 20:11:56 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3922682 00:06:17.813 20:11:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3922682 00:06:18.070 00:06:18.070 real 0m1.200s 00:06:18.070 user 0m2.138s 00:06:18.070 sys 0m0.441s 00:06:18.070 20:11:56 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.070 20:11:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.070 ************************************ 00:06:18.070 END TEST spdkcli_tcp 00:06:18.070 ************************************ 00:06:18.070 20:11:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.070 20:11:56 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.070 20:11:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.070 20:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.070 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.070 ************************************ 00:06:18.070 START TEST dpdk_mem_utility 00:06:18.070 ************************************ 00:06:18.070 20:11:56 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.327 * Looking for test storage... 
00:06:18.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:18.327 20:11:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:18.327 20:11:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3922882 00:06:18.327 20:11:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.327 20:11:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3922882 00:06:18.327 20:11:56 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3922882 ']' 00:06:18.327 20:11:56 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.327 20:11:56 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.327 20:11:56 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.327 20:11:56 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.327 20:11:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.327 [2024-07-15 20:11:56.675829] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:18.327 [2024-07-15 20:11:56.675947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922882 ] 00:06:18.327 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.327 [2024-07-15 20:11:56.733454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.327 [2024-07-15 20:11:56.817291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.585 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.585 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:18.585 20:11:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.585 20:11:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.585 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.585 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.585 { 00:06:18.585 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.585 } 00:06:18.585 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.585 20:11:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:18.843 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:18.843 1 heaps totaling size 814.000000 MiB 00:06:18.843 size: 814.000000 MiB heap id: 0 00:06:18.843 end heaps---------- 00:06:18.843 8 mempools totaling size 598.116089 MiB 00:06:18.843 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.843 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.843 size: 84.521057 MiB name: bdev_io_3922882 00:06:18.843 size: 51.011292 MiB name: evtpool_3922882 00:06:18.843 
size: 50.003479 MiB name: msgpool_3922882 00:06:18.843 size: 21.763794 MiB name: PDU_Pool 00:06:18.843 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.843 size: 0.026123 MiB name: Session_Pool 00:06:18.843 end mempools------- 00:06:18.843 6 memzones totaling size 4.142822 MiB 00:06:18.843 size: 1.000366 MiB name: RG_ring_0_3922882 00:06:18.843 size: 1.000366 MiB name: RG_ring_1_3922882 00:06:18.843 size: 1.000366 MiB name: RG_ring_4_3922882 00:06:18.843 size: 1.000366 MiB name: RG_ring_5_3922882 00:06:18.843 size: 0.125366 MiB name: RG_ring_2_3922882 00:06:18.843 size: 0.015991 MiB name: RG_ring_3_3922882 00:06:18.843 end memzones------- 00:06:18.843 20:11:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.843 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:18.843 list of free elements. size: 12.519348 MiB 00:06:18.843 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:18.843 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:18.843 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:18.843 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:18.843 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:18.843 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:18.843 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:18.843 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:18.843 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:18.843 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:18.843 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:18.843 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:18.843 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:18.843 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:18.843 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:18.843 list of standard malloc elements. 
size: 199.218079 MiB 00:06:18.843 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:18.843 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:18.843 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:18.843 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:18.843 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:18.843 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:18.843 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:18.843 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:18.843 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:18.843 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:18.843 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:18.843 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:18.843 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:18.843 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:18.843 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:18.843 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:18.843 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:18.843 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:18.843 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:18.843 list of memzone associated elements. 
size: 602.262573 MiB 00:06:18.843 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:18.843 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.843 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:18.843 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.843 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:18.843 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3922882_0 00:06:18.843 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:18.843 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3922882_0 00:06:18.843 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:18.843 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3922882_0 00:06:18.843 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:18.843 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.843 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:18.843 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.843 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:18.843 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3922882 00:06:18.843 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:18.843 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3922882 00:06:18.843 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:18.843 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3922882 00:06:18.843 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:18.843 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.843 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:18.843 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.843 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:18.843 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:18.843 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:18.843 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:18.843 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:18.843 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3922882 00:06:18.843 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:18.843 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3922882 00:06:18.843 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:18.843 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3922882 00:06:18.843 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:18.843 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3922882 00:06:18.843 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:18.843 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3922882 00:06:18.843 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:18.843 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:18.843 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:18.843 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:18.843 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:18.843 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:18.843 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:18.843 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3922882 00:06:18.843 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:18.843 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:18.843 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:18.843 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:18.843 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:18.843 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3922882 00:06:18.843 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:18.843 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:18.843 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:18.843 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3922882 00:06:18.843 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:18.843 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3922882 00:06:18.843 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:18.843 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:18.843 20:11:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:18.843 20:11:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3922882 00:06:18.843 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3922882 ']' 00:06:18.843 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3922882 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922882 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922882' 00:06:18.844 killing process with pid 3922882 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3922882 00:06:18.844 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3922882 00:06:19.101 00:06:19.101 real 0m1.048s 00:06:19.101 user 0m0.993s 00:06:19.101 sys 0m0.421s 00:06:19.101 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.101 20:11:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.101 ************************************ 00:06:19.101 END TEST dpdk_mem_utility 00:06:19.101 ************************************ 00:06:19.359 20:11:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.359 20:11:57 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:19.359 20:11:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.359 20:11:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.359 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.359 ************************************ 00:06:19.359 START TEST event 00:06:19.359 ************************************ 00:06:19.359 20:11:57 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:19.359 * Looking for test storage... 
00:06:19.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:19.359 20:11:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:19.359 20:11:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:19.359 20:11:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.359 20:11:57 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:19.359 20:11:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.359 20:11:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.359 ************************************ 00:06:19.359 START TEST event_perf 00:06:19.359 ************************************ 00:06:19.359 20:11:57 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.359 Running I/O for 1 seconds...[2024-07-15 20:11:57.755284] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:19.359 [2024-07-15 20:11:57.755348] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923072 ] 00:06:19.359 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.359 [2024-07-15 20:11:57.817111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.618 [2024-07-15 20:11:57.912116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.618 [2024-07-15 20:11:57.912170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.618 [2024-07-15 20:11:57.912301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.618 [2024-07-15 20:11:57.912302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.553 Running I/O for 1 seconds... 00:06:20.553 lcore 0: 234611 00:06:20.553 lcore 1: 234610 00:06:20.553 lcore 2: 234609 00:06:20.553 lcore 3: 234610 00:06:20.553 done. 00:06:20.553 00:06:20.553 real 0m1.254s 00:06:20.553 user 0m4.167s 00:06:20.553 sys 0m0.082s 00:06:20.553 20:11:58 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.553 20:11:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.553 ************************************ 00:06:20.553 END TEST event_perf 00:06:20.553 ************************************ 00:06:20.553 20:11:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.553 20:11:59 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:20.553 20:11:59 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.553 20:11:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.553 20:11:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.553 ************************************ 00:06:20.553 START TEST event_reactor 00:06:20.553 ************************************ 00:06:20.553 20:11:59 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:20.553 [2024-07-15 20:11:59.056249] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:20.553 [2024-07-15 20:11:59.056317] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923233 ] 00:06:20.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.811 [2024-07-15 20:11:59.118891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.811 [2024-07-15 20:11:59.209497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.183 test_start 00:06:22.183 oneshot 00:06:22.183 tick 100 00:06:22.183 tick 100 00:06:22.183 tick 250 00:06:22.183 tick 100 00:06:22.183 tick 100 00:06:22.183 tick 100 00:06:22.183 tick 250 00:06:22.183 tick 500 00:06:22.183 tick 100 00:06:22.183 tick 100 00:06:22.183 tick 250 00:06:22.183 tick 100 00:06:22.183 tick 100 00:06:22.183 test_end 00:06:22.183 00:06:22.183 real 0m1.244s 00:06:22.183 user 0m1.153s 00:06:22.183 sys 0m0.086s 00:06:22.183 20:12:00 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.183 20:12:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:22.183 ************************************ 00:06:22.183 END TEST event_reactor 00:06:22.183 ************************************ 00:06:22.183 20:12:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.183 20:12:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.183 20:12:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:22.183 20:12:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.183 20:12:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.183 ************************************ 00:06:22.183 START TEST event_reactor_perf 00:06:22.183 ************************************ 00:06:22.183 20:12:00 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.183 [2024-07-15 20:12:00.341351] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:22.183 [2024-07-15 20:12:00.341412] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923423 ] 00:06:22.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.183 [2024-07-15 20:12:00.402421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.183 [2024-07-15 20:12:00.504383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.117 test_start 00:06:23.117 test_end 00:06:23.117 Performance: 358630 events per second 00:06:23.117 00:06:23.117 real 0m1.253s 00:06:23.117 user 0m1.164s 00:06:23.117 sys 0m0.083s 00:06:23.117 20:12:01 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.117 20:12:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.117 ************************************ 00:06:23.117 END TEST event_reactor_perf 00:06:23.117 ************************************ 00:06:23.117 20:12:01 event -- common/autotest_common.sh@1142 -- # return 0 00:06:23.117 20:12:01 event -- event/event.sh@49 -- # uname -s 00:06:23.117 20:12:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:23.117 20:12:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:23.117 20:12:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.117 20:12:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.117 20:12:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.117 ************************************ 00:06:23.117 START TEST event_scheduler 00:06:23.117 ************************************ 00:06:23.117 20:12:01 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:23.375 * Looking for test storage... 00:06:23.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:23.375 20:12:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:23.375 20:12:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3923679 00:06:23.375 20:12:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:23.375 20:12:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.375 20:12:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3923679 00:06:23.375 20:12:01 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3923679 ']' 00:06:23.375 20:12:01 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.375 20:12:01 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.375 20:12:01 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.375 20:12:01 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.375 20:12:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.375 [2024-07-15 20:12:01.720108] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:23.375 [2024-07-15 20:12:01.720204] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923679 ] 00:06:23.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.375 [2024-07-15 20:12:01.786190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.375 [2024-07-15 20:12:01.877711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.375 [2024-07-15 20:12:01.877763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.375 [2024-07-15 20:12:01.877760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.376 [2024-07-15 20:12:01.877737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.634 20:12:01 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.634 20:12:01 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:23.634 20:12:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:23.634 20:12:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.634 20:12:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.634 [2024-07-15 20:12:01.954629] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:23.634 [2024-07-15 20:12:01.954655] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:23.634 [2024-07-15 20:12:01.954670] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:23.634 [2024-07-15 20:12:01.954680] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:23.634 [2024-07-15 20:12:01.954690] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:23.634 20:12:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.634 20:12:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:23.635 20:12:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 [2024-07-15 20:12:02.046971] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
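Editorial note: at this point the scheduler test app (launched with --wait-for-rpc above) has been switched to the dynamic scheduler and its framework initialized. A minimal sketch of that RPC sequence follows, assuming an SPDK app already running with --wait-for-rpc and listening on the default /var/tmp/spdk.sock; the rpc.py path and both RPC names are taken from this trace, but the snippet is illustrative, not a verbatim excerpt of the test.
# Select the dynamic scheduler before completing framework init;
# both RPCs appear in the rpc_get_methods listing earlier in this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC framework_set_scheduler dynamic   # may log a dpdk governor warning, as seen above
$RPC framework_start_init              # finish subsystem init so the reactors begin scheduling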
00:06:23.635 20:12:02 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:23.635 20:12:02 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.635 20:12:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 ************************************ 00:06:23.635 START TEST scheduler_create_thread 00:06:23.635 ************************************ 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 2 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 3 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 4 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 5 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 6 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 7 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 8 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 9 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 10 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.635 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.918 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.919 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.177 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.177 00:06:24.177 real 0m0.592s 00:06:24.177 user 0m0.010s 00:06:24.177 sys 0m0.007s 00:06:24.177 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.177 20:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.177 ************************************ 00:06:24.177 END TEST scheduler_create_thread 00:06:24.177 ************************************ 00:06:24.177 20:12:02 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:24.177 20:12:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.177 20:12:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3923679 00:06:24.177 20:12:02 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3923679 ']' 00:06:24.177 20:12:02 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3923679 00:06:24.177 20:12:02 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:24.177 20:12:02 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.177 20:12:02 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3923679 00:06:24.435 20:12:02 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:24.435 20:12:02 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:24.435 20:12:02 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3923679' 00:06:24.435 killing process with pid 3923679 00:06:24.435 20:12:02 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3923679 00:06:24.435 20:12:02 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3923679 00:06:24.692 [2024-07-15 20:12:03.147099] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
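Editorial note: the scheduler_create_thread subtest above drives the scheduler test app through its plugin RPCs, creating pinned active/idle threads on each core, adjusting one thread's active percentage, and deleting a thread. A minimal sketch of that call pattern, assuming the scheduler test app is still listening on /var/tmp/spdk.sock and that scheduler_plugin is importable by rpc.py (the test's scheduler.sh arranges this); capturing the thread id stands in for the hard-coded 11/12 seen in the trace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Create an active thread pinned to core 0 at 100% load; the RPC prints the new thread id.
tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
# Drop the thread's requested busy time to 50%, then remove it.
$RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
$RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"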
00:06:24.949 00:06:24.949 real 0m1.729s 00:06:24.949 user 0m2.294s 00:06:24.949 sys 0m0.309s 00:06:24.949 20:12:03 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.949 20:12:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 ************************************ 00:06:24.949 END TEST event_scheduler 00:06:24.949 ************************************ 00:06:24.949 20:12:03 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.949 20:12:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:24.949 20:12:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:24.949 20:12:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.949 20:12:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.949 20:12:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 ************************************ 00:06:24.949 START TEST app_repeat 00:06:24.949 ************************************ 00:06:24.949 20:12:03 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:24.949 20:12:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.949 20:12:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.949 20:12:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:24.949 20:12:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.949 20:12:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3923996 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3923996' 00:06:24.950 Process app_repeat pid: 3923996 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:24.950 spdk_app_start Round 0 00:06:24.950 20:12:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3923996 /var/tmp/spdk-nbd.sock 00:06:24.950 20:12:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3923996 ']' 00:06:24.950 20:12:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.950 20:12:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.950 20:12:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.950 20:12:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.950 20:12:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.950 [2024-07-15 20:12:03.438773] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:24.950 [2024-07-15 20:12:03.438835] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923996 ] 00:06:24.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.208 [2024-07-15 20:12:03.500835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.208 [2024-07-15 20:12:03.591320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.208 [2024-07-15 20:12:03.591326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.208 20:12:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.208 20:12:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:25.208 20:12:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.466 Malloc0 00:06:25.466 20:12:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.035 Malloc1 00:06:26.035 20:12:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.035 /dev/nbd0 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.035 20:12:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.035 20:12:04 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.035 1+0 records in 00:06:26.035 1+0 records out 00:06:26.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173315 s, 23.6 MB/s 00:06:26.035 20:12:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.293 20:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.293 20:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.293 20:12:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.293 /dev/nbd1 00:06:26.293 20:12:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.293 20:12:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.293 20:12:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.551 1+0 records in 00:06:26.551 1+0 records out 00:06:26.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002105 s, 19.5 MB/s 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.551 20:12:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.551 20:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.551 20:12:04 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.551 20:12:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.551 20:12:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.551 20:12:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.809 { 00:06:26.809 "nbd_device": "/dev/nbd0", 00:06:26.809 "bdev_name": "Malloc0" 00:06:26.809 }, 00:06:26.809 { 00:06:26.809 "nbd_device": "/dev/nbd1", 00:06:26.809 "bdev_name": "Malloc1" 00:06:26.809 } 00:06:26.809 ]' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.809 { 00:06:26.809 "nbd_device": "/dev/nbd0", 00:06:26.809 "bdev_name": "Malloc0" 00:06:26.809 }, 00:06:26.809 { 00:06:26.809 "nbd_device": "/dev/nbd1", 00:06:26.809 "bdev_name": "Malloc1" 00:06:26.809 } 00:06:26.809 ]' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.809 /dev/nbd1' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.809 /dev/nbd1' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.809 256+0 records in 00:06:26.809 256+0 records out 00:06:26.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502142 s, 209 MB/s 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.809 256+0 records in 00:06:26.809 256+0 records out 00:06:26.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211963 s, 49.5 MB/s 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.809 256+0 records in 00:06:26.809 256+0 records out 00:06:26.809 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0247178 s, 42.4 MB/s 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.809 20:12:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.067 20:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.067 20:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.067 20:12:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.068 20:12:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.068 20:12:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.068 20:12:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.068 20:12:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.068 20:12:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.068 20:12:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.068 20:12:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.326 20:12:05 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.326 20:12:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.584 20:12:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.584 20:12:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.842 20:12:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.099 [2024-07-15 20:12:06.584833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.357 [2024-07-15 20:12:06.674960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.357 [2024-07-15 20:12:06.674965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.357 [2024-07-15 20:12:06.736261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.358 [2024-07-15 20:12:06.736340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.883 20:12:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.883 20:12:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:30.883 spdk_app_start Round 1 00:06:30.883 20:12:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3923996 /var/tmp/spdk-nbd.sock 00:06:30.883 20:12:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3923996 ']' 00:06:30.883 20:12:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.883 20:12:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.883 20:12:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
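The trace above is the nbd_rpc_data_verify flow: two 64 MB malloc bdevs with 4096-byte blocks are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each NBD device, and the same bytes are compared back before teardown. Below is a minimal standalone sketch of that write/verify pattern — a paraphrase, not the SPDK test script itself; the RPC socket, bdev names and sizes are taken from the log, the temp-file path is an assumption, and it presumes the nbd app is already listening on the socket with Malloc0/Malloc1 created (bdev_malloc_create 64 4096 as traced above).

    #!/usr/bin/env bash
    # Condensed sketch of the write/verify pattern traced above.
    set -euo pipefail
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    TMP=/tmp/nbdrandtest   # the log uses spdk/test/event/nbdrandtest

    # export the two malloc bdevs as NBD block devices
    $RPC -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
    $RPC -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

    # write phase: 256 x 4 KiB of random data, written to each device with O_DIRECT
    dd if=/dev/urandom of="$TMP" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the file
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$TMP" "$nbd"
    done
    rm -f "$TMP"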
00:06:30.883 20:12:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.883 20:12:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.141 20:12:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.141 20:12:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.141 20:12:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.399 Malloc0 00:06:31.399 20:12:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.657 Malloc1 00:06:31.657 20:12:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.657 20:12:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.915 /dev/nbd0 00:06:31.915 20:12:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.915 20:12:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.915 20:12:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:32.172 1+0 records in 00:06:32.172 1+0 records out 00:06:32.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181301 s, 22.6 MB/s 00:06:32.172 20:12:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.172 20:12:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.172 20:12:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.172 20:12:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.172 20:12:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.172 20:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.173 20:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.173 20:12:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.173 /dev/nbd1 00:06:32.173 20:12:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.431 20:12:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.431 20:12:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.431 1+0 records in 00:06:32.431 1+0 records out 00:06:32.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238499 s, 17.2 MB/s 00:06:32.432 20:12:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.432 20:12:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.432 20:12:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.432 20:12:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.432 20:12:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.432 20:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.432 20:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.432 20:12:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.432 20:12:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.432 20:12:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.689 20:12:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:32.689 { 00:06:32.689 "nbd_device": "/dev/nbd0", 00:06:32.689 "bdev_name": "Malloc0" 00:06:32.689 }, 00:06:32.689 { 00:06:32.689 "nbd_device": "/dev/nbd1", 00:06:32.689 "bdev_name": "Malloc1" 00:06:32.690 } 00:06:32.690 ]' 00:06:32.690 20:12:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.690 { 00:06:32.690 "nbd_device": "/dev/nbd0", 00:06:32.690 "bdev_name": "Malloc0" 00:06:32.690 }, 00:06:32.690 { 00:06:32.690 "nbd_device": "/dev/nbd1", 00:06:32.690 "bdev_name": "Malloc1" 00:06:32.690 } 00:06:32.690 ]' 00:06:32.690 20:12:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.690 /dev/nbd1' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.690 /dev/nbd1' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.690 256+0 records in 00:06:32.690 256+0 records out 00:06:32.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495278 s, 212 MB/s 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.690 256+0 records in 00:06:32.690 256+0 records out 00:06:32.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239827 s, 43.7 MB/s 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.690 256+0 records in 00:06:32.690 256+0 records out 00:06:32.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02291 s, 45.8 MB/s 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.690 20:12:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.947 20:12:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.948 20:12:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.948 20:12:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.204 20:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.462 20:12:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.462 20:12:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.719 20:12:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.976 [2024-07-15 20:12:12.448564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.233 [2024-07-15 20:12:12.539070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.233 [2024-07-15 20:12:12.539076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.233 [2024-07-15 20:12:12.601584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.233 [2024-07-15 20:12:12.601663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.754 20:12:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.754 20:12:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.754 spdk_app_start Round 2 00:06:36.754 20:12:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3923996 /var/tmp/spdk-nbd.sock 00:06:36.754 20:12:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3923996 ']' 00:06:36.754 20:12:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.754 20:12:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.754 20:12:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
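The nbd_get_count step seen above relies on nbd_get_disks returning a JSON array that maps each /dev/nbdX to its backing bdev; the test pulls the device names out with jq and counts them with grep -c. A hedged sketch of that check, assuming the same RPC socket and that two NBD devices are currently exported:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    # nbd_get_disks returns e.g.:
    # [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
    #   { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]
    json=$($RPC -s "$SOCK" nbd_get_disks)

    # keep only the device paths, then count the /dev/nbd entries
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)

    [ "$count" -eq 2 ] || echo "expected 2 NBD devices, found $count"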
00:06:36.754 20:12:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.754 20:12:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.014 20:12:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.014 20:12:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:37.014 20:12:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.272 Malloc0 00:06:37.272 20:12:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.530 Malloc1 00:06:37.530 20:12:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.530 20:12:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.788 /dev/nbd0 00:06:37.788 20:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.788 20:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:37.788 1+0 records in 00:06:37.788 1+0 records out 00:06:37.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020669 s, 19.8 MB/s 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.788 20:12:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.788 20:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.788 20:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.788 20:12:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.046 /dev/nbd1 00:06:38.046 20:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.046 20:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.046 1+0 records in 00:06:38.046 1+0 records out 00:06:38.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199712 s, 20.5 MB/s 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.046 20:12:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.046 20:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.046 20:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.046 20:12:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.046 20:12:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.046 20:12:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.304 20:12:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:38.304 { 00:06:38.304 "nbd_device": "/dev/nbd0", 00:06:38.304 "bdev_name": "Malloc0" 00:06:38.304 }, 00:06:38.304 { 00:06:38.304 "nbd_device": "/dev/nbd1", 00:06:38.304 "bdev_name": "Malloc1" 00:06:38.304 } 00:06:38.304 ]' 00:06:38.304 20:12:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.304 { 00:06:38.304 "nbd_device": "/dev/nbd0", 00:06:38.304 "bdev_name": "Malloc0" 00:06:38.304 }, 00:06:38.304 { 00:06:38.304 "nbd_device": "/dev/nbd1", 00:06:38.304 "bdev_name": "Malloc1" 00:06:38.304 } 00:06:38.304 ]' 00:06:38.304 20:12:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.562 /dev/nbd1' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.562 /dev/nbd1' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.562 256+0 records in 00:06:38.562 256+0 records out 00:06:38.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500901 s, 209 MB/s 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.562 256+0 records in 00:06:38.562 256+0 records out 00:06:38.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020495 s, 51.2 MB/s 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.562 256+0 records in 00:06:38.562 256+0 records out 00:06:38.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247588 s, 42.4 MB/s 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.562 20:12:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.849 20:12:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.113 20:12:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.432 20:12:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.432 20:12:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.690 20:12:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.949 [2024-07-15 20:12:18.268009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.949 [2024-07-15 20:12:18.357649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.949 [2024-07-15 20:12:18.357653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.949 [2024-07-15 20:12:18.419336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.949 [2024-07-15 20:12:18.419423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.229 20:12:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3923996 /var/tmp/spdk-nbd.sock 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3923996 ']' 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
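Each round ends the same way: both NBD devices are stopped, nbd_get_disks is expected to come back empty, spdk_kill_instance SIGTERM shuts the app down, and after sleep 3 the next round begins. The waitfornbd / waitfornbd_exit helpers visible in the trace poll /proc/partitions up to 20 times for the nbd name to appear or disappear (the real helper in autotest_common.sh additionally issues a direct-I/O dd read to confirm the device answers, as logged above). A rough paraphrase of the polling, not the exact helper code:

    # wait for an nbd device to show up in /proc/partitions
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1
    }

    # wait for an nbd device to disappear after nbd_stop_disk
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }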
00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.229 20:12:21 event.app_repeat -- event/event.sh@39 -- # killprocess 3923996 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3923996 ']' 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3923996 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3923996 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3923996' 00:06:43.229 killing process with pid 3923996 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3923996 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3923996 00:06:43.229 spdk_app_start is called in Round 0. 00:06:43.229 Shutdown signal received, stop current app iteration 00:06:43.229 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 reinitialization... 00:06:43.229 spdk_app_start is called in Round 1. 00:06:43.229 Shutdown signal received, stop current app iteration 00:06:43.229 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 reinitialization... 00:06:43.229 spdk_app_start is called in Round 2. 00:06:43.229 Shutdown signal received, stop current app iteration 00:06:43.229 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 reinitialization... 00:06:43.229 spdk_app_start is called in Round 3. 
00:06:43.229 Shutdown signal received, stop current app iteration 00:06:43.229 20:12:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.229 20:12:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.229 00:06:43.229 real 0m18.117s 00:06:43.229 user 0m39.573s 00:06:43.229 sys 0m3.212s 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.229 20:12:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 ************************************ 00:06:43.229 END TEST app_repeat 00:06:43.229 ************************************ 00:06:43.229 20:12:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.229 20:12:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.229 20:12:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:43.229 20:12:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.229 20:12:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.229 20:12:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 ************************************ 00:06:43.229 START TEST cpu_locks 00:06:43.229 ************************************ 00:06:43.229 20:12:21 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:43.229 * Looking for test storage... 00:06:43.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:43.229 20:12:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.229 20:12:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.229 20:12:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.229 20:12:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.229 20:12:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.229 20:12:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.229 20:12:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 ************************************ 00:06:43.229 START TEST default_locks 00:06:43.229 ************************************ 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3926858 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3926858 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3926858 ']' 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.229 20:12:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
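The default_locks test that starts here launches spdk_tgt with core mask 0x1 and then verifies that the claimed core is protected by a per-core file lock: lslocks is run against the target's pid and its output is grepped for the spdk_cpu_lock prefix. The prefix comes from the trace; the exact lock-file directory is not shown in the log and is left out of this sketch:

    # Sketch of the locks_exist check used by the cpu_locks tests below.
    locks_exist() {
        local pid=$1
        # spdk_tgt -m 0x1 takes a file lock for each claimed core; lslocks
        # lists the locks held by the process, and we only look for the prefix.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # usage (pid taken from the trace; purely illustrative):
    # locks_exist 3926858 && echo "core lock is held"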
00:06:43.229 20:12:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.230 20:12:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.230 [2024-07-15 20:12:21.713609] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:43.230 [2024-07-15 20:12:21.713687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926858 ] 00:06:43.230 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.488 [2024-07-15 20:12:21.775985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.488 [2024-07-15 20:12:21.867891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.746 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.746 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:43.746 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3926858 00:06:43.746 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3926858 00:06:43.746 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.004 lslocks: write error 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3926858 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3926858 ']' 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3926858 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3926858 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3926858' 00:06:44.004 killing process with pid 3926858 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3926858 00:06:44.004 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3926858 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3926858 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3926858 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3926858 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3926858 ']' 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3926858) - No such process 00:06:44.569 ERROR: process (pid: 3926858) is no longer running 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.569 00:06:44.569 real 0m1.260s 00:06:44.569 user 0m1.209s 00:06:44.569 sys 0m0.560s 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.569 20:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 END TEST default_locks 00:06:44.569 ************************************ 00:06:44.569 20:12:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.569 20:12:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:44.569 20:12:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.569 20:12:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.569 20:12:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 START TEST default_locks_via_rpc 00:06:44.569 ************************************ 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3927024 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.569 20:12:22 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3927024 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3927024 ']' 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.569 20:12:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 [2024-07-15 20:12:23.020678] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:44.569 [2024-07-15 20:12:23.020761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927024 ] 00:06:44.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.569 [2024-07-15 20:12:23.082494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.827 [2024-07-15 20:12:23.178514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3927024 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3927024 00:06:45.084 20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.340 
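The xtrace above is the core of the default_locks checks: the harness starts spdk_tgt on core mask 0x1, then asks lslocks whether that pid holds a file lock whose path contains spdk_cpu_lock; the *_via_rpc variant additionally toggles that state at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace. A minimal standalone sketch of the same flow outside the harness (the rpc.py wrapper and the sleep-based wait are assumptions, not taken from this log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # checkout path used by this job
    $SPDK/build/bin/spdk_tgt -m 0x1 &
    tgt=$!
    sleep 1                                   # the harness uses waitforlisten on /var/tmp/spdk.sock instead
    # locks_exist: does the target hold a per-core lock file (e.g. /var/tmp/spdk_cpu_lock_000)?
    lslocks -p "$tgt" | grep -q spdk_cpu_lock && echo "core lock held by pid $tgt"
    # default_locks_via_rpc flips the same state over the RPC socket:
    $SPDK/scripts/rpc.py framework_disable_cpumask_locks         # assumed CLI for the RPC method shown later in this log
    $SPDK/scripts/rpc.py framework_enable_cpumask_locks
    kill "$tgt"; wait "$tgt"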
20:12:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3927024 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3927024 ']' 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3927024 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927024 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927024' 00:06:45.340 killing process with pid 3927024 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3927024 00:06:45.340 20:12:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3927024 00:06:45.907 00:06:45.907 real 0m1.219s 00:06:45.907 user 0m1.186s 00:06:45.907 sys 0m0.543s 00:06:45.907 20:12:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.907 20:12:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.907 ************************************ 00:06:45.907 END TEST default_locks_via_rpc 00:06:45.907 ************************************ 00:06:45.907 20:12:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.907 20:12:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.907 20:12:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.907 20:12:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.907 20:12:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.907 ************************************ 00:06:45.907 START TEST non_locking_app_on_locked_coremask 00:06:45.907 ************************************ 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3927193 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3927193 /var/tmp/spdk.sock 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3927193 ']' 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.907 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.907 [2024-07-15 20:12:24.289296] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:45.907 [2024-07-15 20:12:24.289386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927193 ] 00:06:45.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.907 [2024-07-15 20:12:24.352636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.165 [2024-07-15 20:12:24.448072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3927313 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3927313 /var/tmp/spdk2.sock 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3927313 ']' 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.424 20:12:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.424 [2024-07-15 20:12:24.752901] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:46.424 [2024-07-15 20:12:24.753001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927313 ] 00:06:46.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.424 [2024-07-15 20:12:24.835702] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
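Worth noting in the trace just above: the second spdk_tgt is launched on the same -m 0x1 mask but with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock), which is why it reports "CPU core locks deactivated" instead of refusing to start. Reduced to its essentials (paths as used by this job):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &                                                   # claims /var/tmp/spdk_cpu_lock_000
    $SPDK/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # shares core 0 without claiming it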
00:06:46.424 [2024-07-15 20:12:24.835748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.682 [2024-07-15 20:12:25.019018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.248 20:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.248 20:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.248 20:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3927193 00:06:47.248 20:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3927193 00:06:47.248 20:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.830 lslocks: write error 00:06:47.830 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3927193 00:06:47.830 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3927193 ']' 00:06:47.830 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3927193 00:06:47.830 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:47.830 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.830 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927193 00:06:47.831 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.831 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.831 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927193' 00:06:47.831 killing process with pid 3927193 00:06:47.831 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3927193 00:06:47.831 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3927193 00:06:48.397 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3927313 00:06:48.397 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3927313 ']' 00:06:48.397 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3927313 00:06:48.397 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.397 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.397 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927313 00:06:48.654 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.654 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.654 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927313' 00:06:48.654 
killing process with pid 3927313 00:06:48.654 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3927313 00:06:48.654 20:12:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3927313 00:06:48.912 00:06:48.912 real 0m3.115s 00:06:48.912 user 0m3.242s 00:06:48.912 sys 0m1.081s 00:06:48.912 20:12:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.912 20:12:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.912 ************************************ 00:06:48.912 END TEST non_locking_app_on_locked_coremask 00:06:48.912 ************************************ 00:06:48.912 20:12:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:48.912 20:12:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.912 20:12:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.912 20:12:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.912 20:12:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.912 ************************************ 00:06:48.912 START TEST locking_app_on_unlocked_coremask 00:06:48.912 ************************************ 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3927626 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3927626 /var/tmp/spdk.sock 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3927626 ']' 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.912 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.169 [2024-07-15 20:12:27.448408] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:49.169 [2024-07-15 20:12:27.448487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927626 ] 00:06:49.169 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.169 [2024-07-15 20:12:27.505900] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:49.169 [2024-07-15 20:12:27.505942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.169 [2024-07-15 20:12:27.593636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3927695 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3927695 /var/tmp/spdk2.sock 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3927695 ']' 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.443 20:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.443 [2024-07-15 20:12:27.892837] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:49.443 [2024-07-15 20:12:27.892941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927695 ] 00:06:49.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.701 [2024-07-15 20:12:27.984758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.701 [2024-07-15 20:12:28.167818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.633 20:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.633 20:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.633 20:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3927695 00:06:50.633 20:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3927695 00:06:50.633 20:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.890 lslocks: write error 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3927626 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3927626 ']' 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3927626 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927626 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927626' 00:06:50.890 killing process with pid 3927626 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3927626 00:06:50.890 20:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3927626 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3927695 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3927695 ']' 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3927695 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927695 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927695' 00:06:51.822 killing process with pid 3927695 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3927695 00:06:51.822 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3927695 00:06:52.082 00:06:52.082 real 0m3.098s 00:06:52.082 user 0m3.233s 00:06:52.082 sys 0m1.038s 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 ************************************ 00:06:52.082 END TEST locking_app_on_unlocked_coremask 00:06:52.082 ************************************ 00:06:52.082 20:12:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.082 20:12:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:52.082 20:12:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.082 20:12:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.082 20:12:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 ************************************ 00:06:52.082 START TEST locking_app_on_locked_coremask 00:06:52.082 ************************************ 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3928056 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3928056 /var/tmp/spdk.sock 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3928056 ']' 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.082 20:12:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 [2024-07-15 20:12:30.594468] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:52.082 [2024-07-15 20:12:30.594568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928056 ] 00:06:52.341 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.341 [2024-07-15 20:12:30.659838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.341 [2024-07-15 20:12:30.750085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3928064 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3928064 /var/tmp/spdk2.sock 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3928064 /var/tmp/spdk2.sock 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3928064 /var/tmp/spdk2.sock 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3928064 ']' 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.599 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.599 [2024-07-15 20:12:31.061195] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:52.599 [2024-07-15 20:12:31.061294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928064 ] 00:06:52.599 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.857 [2024-07-15 20:12:31.158443] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3928056 has claimed it. 00:06:52.857 [2024-07-15 20:12:31.158521] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3928064) - No such process 00:06:53.422 ERROR: process (pid: 3928064) is no longer running 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3928056 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3928056 00:06:53.422 20:12:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.680 lslocks: write error 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3928056 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3928056 ']' 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3928056 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3928056 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3928056' 00:06:53.680 killing process with pid 3928056 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3928056 00:06:53.680 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3928056 00:06:54.246 00:06:54.246 real 0m1.978s 00:06:54.246 user 0m2.136s 00:06:54.246 sys 0m0.645s 00:06:54.246 20:12:32 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.246 20:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.246 ************************************ 00:06:54.246 END TEST locking_app_on_locked_coremask 00:06:54.246 ************************************ 00:06:54.246 20:12:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:54.246 20:12:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:54.247 20:12:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.247 20:12:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.247 20:12:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.247 ************************************ 00:06:54.247 START TEST locking_overlapped_coremask 00:06:54.247 ************************************ 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3928347 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3928347 /var/tmp/spdk.sock 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3928347 ']' 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.247 20:12:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.247 [2024-07-15 20:12:32.625388] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:54.247 [2024-07-15 20:12:32.625492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928347 ] 00:06:54.247 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.247 [2024-07-15 20:12:32.687551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.247 [2024-07-15 20:12:32.777804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.247 [2024-07-15 20:12:32.777858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.247 [2024-07-15 20:12:32.777861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3928361 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3928361 /var/tmp/spdk2.sock 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3928361 /var/tmp/spdk2.sock 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3928361 /var/tmp/spdk2.sock 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3928361 ']' 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.505 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.775 [2024-07-15 20:12:33.070539] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:06:54.775 [2024-07-15 20:12:33.070642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928361 ] 00:06:54.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.776 [2024-07-15 20:12:33.159574] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3928347 has claimed it. 00:06:54.776 [2024-07-15 20:12:33.159640] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3928361) - No such process 00:06:55.383 ERROR: process (pid: 3928361) is no longer running 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3928347 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3928347 ']' 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3928347 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3928347 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3928347' 00:06:55.383 killing process with pid 3928347 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3928347 00:06:55.383 20:12:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3928347 00:06:55.948 00:06:55.948 real 0m1.617s 00:06:55.948 user 0m4.354s 00:06:55.948 sys 0m0.455s 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 ************************************ 00:06:55.948 END TEST locking_overlapped_coremask 00:06:55.948 ************************************ 00:06:55.948 20:12:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.948 20:12:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.948 20:12:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.948 20:12:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.948 20:12:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 ************************************ 00:06:55.948 START TEST locking_overlapped_coremask_via_rpc 00:06:55.948 ************************************ 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3928527 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3928527 /var/tmp/spdk.sock 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3928527 ']' 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.948 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 [2024-07-15 20:12:34.294219] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:55.948 [2024-07-15 20:12:34.294319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928527 ] 00:06:55.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.948 [2024-07-15 20:12:34.356825] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.948 [2024-07-15 20:12:34.356860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.948 [2024-07-15 20:12:34.448039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.948 [2024-07-15 20:12:34.448092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.948 [2024-07-15 20:12:34.448110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3928654 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3928654 /var/tmp/spdk2.sock 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3928654 ']' 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.205 20:12:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.463 [2024-07-15 20:12:34.744516] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:56.463 [2024-07-15 20:12:34.744618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928654 ] 00:06:56.463 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.463 [2024-07-15 20:12:34.831678] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.463 [2024-07-15 20:12:34.831721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.720 [2024-07-15 20:12:35.006664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.720 [2024-07-15 20:12:35.009974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.720 [2024-07-15 20:12:35.009977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.284 [2024-07-15 20:12:35.691980] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3928527 has claimed it. 
00:06:57.284 request: 00:06:57.284 { 00:06:57.284 "method": "framework_enable_cpumask_locks", 00:06:57.284 "req_id": 1 00:06:57.284 } 00:06:57.284 Got JSON-RPC error response 00:06:57.284 response: 00:06:57.284 { 00:06:57.284 "code": -32603, 00:06:57.284 "message": "Failed to claim CPU core: 2" 00:06:57.284 } 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3928527 /var/tmp/spdk.sock 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3928527 ']' 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.284 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3928654 /var/tmp/spdk2.sock 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3928654 ']' 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
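The JSON-RPC exchange above is the crux of the via_rpc variant: both targets start with --disable-cpumask-locks, the first (pid 3928527, mask 0x7, cores 0-2) then enables its locks over /var/tmp/spdk.sock, so when the second (mask 0x1c, cores 2-4) issues the same method on /var/tmp/spdk2.sock it fails with -32603 because core 2 is already claimed. A sketch of issuing that failing call by hand (rpc.py as the client is an assumption; the method name and sockets are the ones in the trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks          # first target: succeeds, locks cores 0-2
    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected failure: core 2 already locked by the 0x7 target"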
00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.541 20:12:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.798 00:06:57.798 real 0m1.961s 00:06:57.798 user 0m1.023s 00:06:57.798 sys 0m0.159s 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.798 20:12:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.798 ************************************ 00:06:57.798 END TEST locking_overlapped_coremask_via_rpc 00:06:57.798 ************************************ 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.798 20:12:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.798 20:12:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3928527 ]] 00:06:57.798 20:12:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3928527 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3928527 ']' 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3928527 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3928527 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3928527' 00:06:57.798 killing process with pid 3928527 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3928527 00:06:57.798 20:12:36 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3928527 00:06:58.363 20:12:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3928654 ]] 00:06:58.363 20:12:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3928654 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3928654 ']' 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3928654 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3928654 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3928654' 00:06:58.363 killing process with pid 3928654 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3928654 00:06:58.363 20:12:36 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3928654 00:06:58.621 20:12:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.621 20:12:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.621 20:12:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3928527 ]] 00:06:58.621 20:12:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3928527 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3928527 ']' 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3928527 00:06:58.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3928527) - No such process 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3928527 is not found' 00:06:58.621 Process with pid 3928527 is not found 00:06:58.621 20:12:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3928654 ]] 00:06:58.621 20:12:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3928654 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3928654 ']' 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3928654 00:06:58.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3928654) - No such process 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3928654 is not found' 00:06:58.621 Process with pid 3928654 is not found 00:06:58.621 20:12:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.621 00:06:58.621 real 0m15.494s 00:06:58.621 user 0m27.078s 00:06:58.621 sys 0m5.360s 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.621 20:12:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.621 ************************************ 00:06:58.621 END TEST cpu_locks 00:06:58.621 ************************************ 00:06:58.621 20:12:37 event -- common/autotest_common.sh@1142 -- # return 0 00:06:58.621 00:06:58.621 real 0m39.437s 00:06:58.621 user 1m15.558s 00:06:58.621 sys 0m9.372s 00:06:58.621 20:12:37 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.621 20:12:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.621 ************************************ 00:06:58.621 END TEST event 00:06:58.621 ************************************ 00:06:58.621 20:12:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.621 20:12:37 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.622 20:12:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.622 20:12:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.622 
20:12:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.622 ************************************ 00:06:58.622 START TEST thread 00:06:58.622 ************************************ 00:06:58.622 20:12:37 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.879 * Looking for test storage... 00:06:58.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:58.879 20:12:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.879 20:12:37 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:58.879 20:12:37 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.879 20:12:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.879 ************************************ 00:06:58.879 START TEST thread_poller_perf 00:06:58.879 ************************************ 00:06:58.879 20:12:37 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.879 [2024-07-15 20:12:37.235153] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:06:58.879 [2024-07-15 20:12:37.235226] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929019 ] 00:06:58.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.879 [2024-07-15 20:12:37.299250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.879 [2024-07-15 20:12:37.392251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.879 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:00.251 ====================================== 00:07:00.251 busy:2713414668 (cyc) 00:07:00.251 total_run_count: 292000 00:07:00.251 tsc_hz: 2700000000 (cyc) 00:07:00.251 ====================================== 00:07:00.251 poller_cost: 9292 (cyc), 3441 (nsec) 00:07:00.251 00:07:00.251 real 0m1.261s 00:07:00.251 user 0m1.176s 00:07:00.251 sys 0m0.080s 00:07:00.251 20:12:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.251 20:12:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.251 ************************************ 00:07:00.251 END TEST thread_poller_perf 00:07:00.251 ************************************ 00:07:00.251 20:12:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:00.251 20:12:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.251 20:12:38 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:00.251 20:12:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.251 20:12:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.251 ************************************ 00:07:00.251 START TEST thread_poller_perf 00:07:00.251 ************************************ 00:07:00.251 20:12:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.251 [2024-07-15 20:12:38.540579] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:00.251 [2024-07-15 20:12:38.540630] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929179 ] 00:07:00.251 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.251 [2024-07-15 20:12:38.600075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.251 [2024-07-15 20:12:38.692143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.251 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:01.649 ====================================== 00:07:01.649 busy:2702697386 (cyc) 00:07:01.649 total_run_count: 3858000 00:07:01.649 tsc_hz: 2700000000 (cyc) 00:07:01.649 ====================================== 00:07:01.649 poller_cost: 700 (cyc), 259 (nsec) 00:07:01.649 00:07:01.649 real 0m1.246s 00:07:01.649 user 0m1.157s 00:07:01.649 sys 0m0.083s 00:07:01.649 20:12:39 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.649 20:12:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.649 ************************************ 00:07:01.649 END TEST thread_poller_perf 00:07:01.649 ************************************ 00:07:01.649 20:12:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:01.649 20:12:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.649 00:07:01.649 real 0m2.654s 00:07:01.649 user 0m2.392s 00:07:01.649 sys 0m0.261s 00:07:01.649 20:12:39 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.649 20:12:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.649 ************************************ 00:07:01.649 END TEST thread 00:07:01.649 ************************************ 00:07:01.649 20:12:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.649 20:12:39 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:01.649 20:12:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.649 20:12:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.649 20:12:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.649 ************************************ 00:07:01.649 START TEST accel 00:07:01.649 ************************************ 00:07:01.649 20:12:39 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:01.649 * Looking for test storage... 00:07:01.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:01.649 20:12:39 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:01.649 20:12:39 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:01.649 20:12:39 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.649 20:12:39 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3929377 00:07:01.649 20:12:39 accel -- accel/accel.sh@63 -- # waitforlisten 3929377 00:07:01.649 20:12:39 accel -- common/autotest_common.sh@829 -- # '[' -z 3929377 ']' 00:07:01.649 20:12:39 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.649 20:12:39 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:01.649 20:12:39 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:01.649 20:12:39 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.649 20:12:39 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
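Note on the two thread_poller_perf result blocks above: the reported poller_cost follows directly from the printed counters, cycles per poll = busy cycles / total_run_count, converted to nanoseconds by dividing by the TSC rate in GHz (tsc_hz / 10^9). Checking against the captured numbers:

    run 1 (-l 1): 2713414668 / 292000  ≈ 9292 cyc;  9292 / 2.7 ≈ 3441 nsec
    run 2 (-l 0): 2702697386 / 3858000 ≈  700 cyc;   700 / 2.7 ≈  259 nsec

which matches the poller_cost lines printed by both runs.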
00:07:01.649 20:12:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.649 20:12:39 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.649 20:12:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.649 20:12:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.649 20:12:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.649 20:12:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.649 20:12:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.649 20:12:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:01.649 20:12:39 accel -- accel/accel.sh@41 -- # jq -r . 00:07:01.649 [2024-07-15 20:12:39.947520] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:01.649 [2024-07-15 20:12:39.947617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929377 ] 00:07:01.649 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.649 [2024-07-15 20:12:40.006320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.649 [2024-07-15 20:12:40.092380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.907 20:12:40 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.907 20:12:40 accel -- common/autotest_common.sh@862 -- # return 0 00:07:01.907 20:12:40 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:01.907 20:12:40 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:01.907 20:12:40 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:01.907 20:12:40 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:01.907 20:12:40 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:01.907 20:12:40 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:01.907 20:12:40 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.907 20:12:40 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:01.907 20:12:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.907 20:12:40 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.907 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.907 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.907 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.907 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.907 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.907 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.907 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.907 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.907 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.907 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.907 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 
20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.908 20:12:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.908 20:12:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.908 20:12:40 accel -- accel/accel.sh@75 -- # killprocess 3929377 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@948 -- # '[' -z 3929377 ']' 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@952 -- # kill -0 3929377 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@953 -- # uname 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3929377 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3929377' 00:07:01.908 killing process with pid 3929377 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@967 -- # kill 3929377 00:07:01.908 20:12:40 accel -- common/autotest_common.sh@972 -- # wait 3929377 00:07:02.473 20:12:40 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:02.473 20:12:40 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:02.473 20:12:40 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.473 20:12:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.473 20:12:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.473 20:12:40 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:02.473 20:12:40 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
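The long accel_get_opc_assignments loop above records every opcode as assigned to the software module, which is expected here since no accel module override is supplied (the [[ -n '' ]] checks). The same mapping can be queried outside the test harness; the sketch below assumes a running SPDK target on the default /var/tmp/spdk.sock socket, with the output shape inferred from the jq filter used in the trace:

    # from the SPDK repository root, with spdk_tgt already running
    ./scripts/rpc.py accel_get_opc_assignments
    # expected form: a JSON object mapping each opcode to its module, e.g.
    # { "copy": "software", "fill": "software", "crc32c": "software", ... }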
00:07:02.473 20:12:40 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.473 20:12:40 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:02.473 20:12:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.473 20:12:40 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:02.473 20:12:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.473 20:12:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.473 20:12:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.473 ************************************ 00:07:02.473 START TEST accel_missing_filename 00:07:02.473 ************************************ 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.473 20:12:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:02.473 20:12:40 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:02.473 [2024-07-15 20:12:40.933360] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:02.473 [2024-07-15 20:12:40.933424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929541 ] 00:07:02.473 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.473 [2024-07-15 20:12:40.995680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.730 [2024-07-15 20:12:41.085420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.730 [2024-07-15 20:12:41.147038] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.730 [2024-07-15 20:12:41.235541] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:02.989 A filename is required. 
00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.989 00:07:02.989 real 0m0.405s 00:07:02.989 user 0m0.291s 00:07:02.989 sys 0m0.146s 00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.989 20:12:41 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:02.989 ************************************ 00:07:02.989 END TEST accel_missing_filename 00:07:02.989 ************************************ 00:07:02.989 20:12:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.989 20:12:41 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.989 20:12:41 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:02.989 20:12:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.989 20:12:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.989 ************************************ 00:07:02.989 START TEST accel_compress_verify 00:07:02.989 ************************************ 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.989 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.989 20:12:41 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:02.989 20:12:41 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:02.989 [2024-07-15 20:12:41.386238] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:02.989 [2024-07-15 20:12:41.386308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929591 ] 00:07:02.989 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.989 [2024-07-15 20:12:41.452028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.249 [2024-07-15 20:12:41.544585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.249 [2024-07-15 20:12:41.606143] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.249 [2024-07-15 20:12:41.694602] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:03.249 00:07:03.249 Compression does not support the verify option, aborting. 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.249 00:07:03.249 real 0m0.409s 00:07:03.249 user 0m0.301s 00:07:03.249 sys 0m0.143s 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.249 20:12:41 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:03.249 ************************************ 00:07:03.249 END TEST accel_compress_verify 00:07:03.249 ************************************ 00:07:03.508 20:12:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.508 20:12:41 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:03.508 20:12:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:03.508 20:12:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.508 20:12:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.508 ************************************ 00:07:03.508 START TEST accel_wrong_workload 00:07:03.508 ************************************ 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.508 20:12:41 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:03.508 20:12:41 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:03.508 Unsupported workload type: foobar 00:07:03.508 [2024-07-15 20:12:41.846244] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:03.508 accel_perf options: 00:07:03.508 [-h help message] 00:07:03.508 [-q queue depth per core] 00:07:03.508 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:03.508 [-T number of threads per core 00:07:03.508 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:03.508 [-t time in seconds] 00:07:03.508 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:03.508 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:03.508 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:03.508 [-l for compress/decompress workloads, name of uncompressed input file 00:07:03.508 [-S for crc32c workload, use this seed value (default 0) 00:07:03.508 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:03.508 [-f for fill workload, use this BYTE value (default 255) 00:07:03.508 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:03.508 [-y verify result if this switch is on] 00:07:03.508 [-a tasks to allocate per core (default: same value as -q)] 00:07:03.508 Can be used to spread operations across a wider range of memory. 
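accel_missing_filename, accel_compress_verify and accel_wrong_workload above are negative tests: accel_perf is wrapped in the NOT helper from autotest_common.sh, so each test passes only when accel_perf exits non-zero after printing the error and usage text captured above (the es=... lines are the helper normalizing that exit status). A simplified sketch of the pattern, assuming only that NOT inverts the exit status of the wrapped command:

    # simplified stand-in for the NOT helper; the real one also tracks es as seen in the trace
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT ./build/examples/accel_perf -t 1 -w compress    # passes: compress with no -l input file
    NOT ./build/examples/accel_perf -t 1 -w foobar      # passes: unsupported workload type

The "Error: writing output failed: Broken pipe" lines that follow appear to be usage text still being written after the capturing side of the pipe has closed; they do not affect the recorded test results.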
00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.508 00:07:03.508 real 0m0.023s 00:07:03.508 user 0m0.017s 00:07:03.508 sys 0m0.006s 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.508 20:12:41 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:03.508 ************************************ 00:07:03.508 END TEST accel_wrong_workload 00:07:03.508 ************************************ 00:07:03.508 Error: writing output failed: Broken pipe 00:07:03.508 20:12:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.508 20:12:41 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:03.509 20:12:41 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:03.509 20:12:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.509 20:12:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.509 ************************************ 00:07:03.509 START TEST accel_negative_buffers 00:07:03.509 ************************************ 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:03.509 20:12:41 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:03.509 -x option must be non-negative. 
00:07:03.509 [2024-07-15 20:12:41.908449] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:03.509 accel_perf options: 00:07:03.509 [-h help message] 00:07:03.509 [-q queue depth per core] 00:07:03.509 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:03.509 [-T number of threads per core 00:07:03.509 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:03.509 [-t time in seconds] 00:07:03.509 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:03.509 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:03.509 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:03.509 [-l for compress/decompress workloads, name of uncompressed input file 00:07:03.509 [-S for crc32c workload, use this seed value (default 0) 00:07:03.509 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:03.509 [-f for fill workload, use this BYTE value (default 255) 00:07:03.509 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:03.509 [-y verify result if this switch is on] 00:07:03.509 [-a tasks to allocate per core (default: same value as -q)] 00:07:03.509 Can be used to spread operations across a wider range of memory. 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.509 00:07:03.509 real 0m0.018s 00:07:03.509 user 0m0.011s 00:07:03.509 sys 0m0.007s 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.509 20:12:41 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:03.509 ************************************ 00:07:03.509 END TEST accel_negative_buffers 00:07:03.509 ************************************ 00:07:03.509 20:12:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.509 20:12:41 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:03.509 20:12:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:03.509 20:12:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.509 20:12:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.509 Error: writing output failed: Broken pipe 00:07:03.509 ************************************ 00:07:03.509 START TEST accel_crc32c 00:07:03.509 ************************************ 00:07:03.509 20:12:41 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:03.509 20:12:41 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:03.509 [2024-07-15 20:12:41.967331] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:03.509 [2024-07-15 20:12:41.967395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929751 ] 00:07:03.509 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.509 [2024-07-15 20:12:42.028998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.767 [2024-07-15 20:12:42.122897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.767 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.768 20:12:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.139 20:12:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.139 20:12:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:05.140 20:12:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.140 00:07:05.140 real 0m1.393s 00:07:05.140 user 0m1.251s 00:07:05.140 sys 0m0.145s 00:07:05.140 20:12:43 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.140 20:12:43 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:05.140 ************************************ 00:07:05.140 END TEST accel_crc32c 00:07:05.140 ************************************ 00:07:05.140 20:12:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.140 20:12:43 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:05.140 20:12:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:05.140 20:12:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.140 20:12:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.140 ************************************ 00:07:05.140 START TEST accel_crc32c_C2 00:07:05.140 ************************************ 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:05.140 [2024-07-15 20:12:43.408365] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:05.140 [2024-07-15 20:12:43.408430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929916 ] 00:07:05.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.140 [2024-07-15 20:12:43.471589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.140 [2024-07-15 20:12:43.564246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:05.140 20:12:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.512 00:07:06.512 real 0m1.408s 00:07:06.512 user 0m1.258s 00:07:06.512 sys 0m0.152s 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.512 20:12:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:06.512 ************************************ 00:07:06.512 END TEST accel_crc32c_C2 00:07:06.512 ************************************ 00:07:06.512 20:12:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.512 20:12:44 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:06.512 20:12:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.512 20:12:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.512 20:12:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.512 ************************************ 00:07:06.512 START TEST accel_copy 00:07:06.512 ************************************ 00:07:06.512 20:12:44 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:06.512 20:12:44 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:06.512 [2024-07-15 20:12:44.863084] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:06.512 [2024-07-15 20:12:44.863144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930141 ] 00:07:06.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.512 [2024-07-15 20:12:44.925614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.512 [2024-07-15 20:12:45.018959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.771 20:12:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.141 
20:12:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.141 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.142 20:12:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:08.142 20:12:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.142 00:07:08.142 real 0m1.409s 00:07:08.142 user 0m1.266s 00:07:08.142 sys 0m0.145s 00:07:08.142 20:12:46 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.142 20:12:46 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.142 ************************************ 00:07:08.142 END TEST accel_copy 00:07:08.142 ************************************ 00:07:08.142 20:12:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.142 20:12:46 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.142 20:12:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:08.142 20:12:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.142 20:12:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.142 ************************************ 00:07:08.142 START TEST accel_fill 00:07:08.142 ************************************ 00:07:08.142 20:12:46 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:08.142 [2024-07-15 20:12:46.319086] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:08.142 [2024-07-15 20:12:46.319154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930335 ] 00:07:08.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.142 [2024-07-15 20:12:46.380672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.142 [2024-07-15 20:12:46.475303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.142 20:12:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:09.557 20:12:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.557 00:07:09.557 real 0m1.403s 00:07:09.557 user 0m1.258s 00:07:09.557 sys 0m0.147s 00:07:09.557 20:12:47 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.557 20:12:47 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:09.557 ************************************ 00:07:09.557 END TEST accel_fill 00:07:09.557 ************************************ 00:07:09.557 20:12:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.557 20:12:47 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:09.557 20:12:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:09.557 20:12:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.557 20:12:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.557 ************************************ 00:07:09.557 START TEST accel_copy_crc32c 00:07:09.557 ************************************ 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:09.557 [2024-07-15 20:12:47.765552] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:09.557 [2024-07-15 20:12:47.765614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930501 ] 00:07:09.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.557 [2024-07-15 20:12:47.825720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.557 [2024-07-15 20:12:47.916909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.557 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.558 
20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.558 20:12:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.926 00:07:10.926 real 0m1.406s 00:07:10.926 user 0m1.269s 00:07:10.926 sys 0m0.139s 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.926 20:12:49 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:10.926 ************************************ 00:07:10.926 END TEST accel_copy_crc32c 00:07:10.926 ************************************ 00:07:10.926 20:12:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.926 20:12:49 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.926 20:12:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:10.926 20:12:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.927 20:12:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.927 ************************************ 00:07:10.927 START TEST accel_copy_crc32c_C2 00:07:10.927 ************************************ 00:07:10.927 20:12:49 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:10.927 [2024-07-15 20:12:49.219225] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:10.927 [2024-07-15 20:12:49.219309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930655 ] 00:07:10.927 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.927 [2024-07-15 20:12:49.282133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.927 [2024-07-15 20:12:49.380358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.927 20:12:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.338 00:07:12.338 real 0m1.411s 00:07:12.338 user 0m1.270s 00:07:12.338 sys 0m0.144s 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.338 20:12:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:12.338 ************************************ 00:07:12.338 END TEST accel_copy_crc32c_C2 00:07:12.338 ************************************ 00:07:12.338 20:12:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.338 20:12:50 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:12.338 20:12:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:12.338 20:12:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.338 20:12:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.338 ************************************ 00:07:12.338 START TEST accel_dualcast 00:07:12.338 ************************************ 00:07:12.338 20:12:50 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:12.338 20:12:50 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:12.338 [2024-07-15 20:12:50.681693] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:07:12.338 [2024-07-15 20:12:50.681758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930929 ] 00:07:12.338 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.338 [2024-07-15 20:12:50.744834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.338 [2024-07-15 20:12:50.836317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.597 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.598 20:12:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:13.970 20:12:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.970 00:07:13.970 real 0m1.413s 00:07:13.970 user 0m1.264s 00:07:13.970 sys 0m0.151s 00:07:13.970 20:12:52 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.970 20:12:52 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:13.970 ************************************ 00:07:13.970 END TEST accel_dualcast 00:07:13.970 ************************************ 00:07:13.970 20:12:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.970 20:12:52 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:13.970 20:12:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.970 20:12:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.970 20:12:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.970 ************************************ 00:07:13.970 START TEST accel_compare 00:07:13.970 ************************************ 00:07:13.970 20:12:52 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:13.970 [2024-07-15 20:12:52.144617] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:07:13.970 [2024-07-15 20:12:52.144681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931082 ] 00:07:13.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.970 [2024-07-15 20:12:52.207771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.970 [2024-07-15 20:12:52.299468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.970 20:12:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 
20:12:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:15.343 20:12:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.343 00:07:15.343 real 0m1.391s 00:07:15.343 user 0m1.257s 00:07:15.343 sys 0m0.136s 00:07:15.343 20:12:53 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.343 20:12:53 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 ************************************ 00:07:15.343 END TEST accel_compare 00:07:15.343 ************************************ 00:07:15.343 20:12:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.343 20:12:53 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:15.343 20:12:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:15.343 20:12:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.343 20:12:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 ************************************ 00:07:15.343 START TEST accel_xor 00:07:15.343 ************************************ 00:07:15.343 20:12:53 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:15.343 [2024-07-15 20:12:53.577476] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
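Note: the accel_xor pass starting here drives the same binary with -t 1 -w xor -y (see the accel_perf line recorded above); the default of two source buffers is used, as reflected by the val=2 entry in the option readout that follows. A hedged standalone sketch under the same build-tree assumption, again without the harness-built JSON config:

    # xor workload for 1 second with the recorded flags
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y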
00:07:15.343 [2024-07-15 20:12:53.577528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931248 ] 00:07:15.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.343 [2024-07-15 20:12:53.638322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.343 [2024-07-15 20:12:53.730953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.343 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.344 20:12:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.712 20:12:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:16.713 20:12:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.713 00:07:16.713 real 0m1.407s 00:07:16.713 user 0m1.270s 00:07:16.713 sys 0m0.139s 00:07:16.713 20:12:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.713 20:12:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:16.713 ************************************ 00:07:16.713 END TEST accel_xor 00:07:16.713 ************************************ 00:07:16.713 20:12:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.713 20:12:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:16.713 20:12:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:16.713 20:12:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.713 20:12:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.713 ************************************ 00:07:16.713 START TEST accel_xor 00:07:16.713 ************************************ 00:07:16.713 20:12:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:16.713 [2024-07-15 20:12:55.030393] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
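Note: this second accel_xor pass repeats the xor workload with -x 3 appended (recorded above as accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3), presumably the xor source-buffer count, matching the val=3 entry parsed below. Standalone sketch under the same assumptions as the previous passes:

    # same xor run but with three source buffers (-x 3), flags as recorded in the log
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3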
00:07:16.713 [2024-07-15 20:12:55.030445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931430 ] 00:07:16.713 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.713 [2024-07-15 20:12:55.090553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.713 [2024-07-15 20:12:55.182563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.713 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.982 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.983 20:12:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 20:12:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.914 20:12:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:17.915 20:12:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.915 00:07:17.915 real 0m1.406s 00:07:17.915 user 0m1.263s 00:07:17.915 sys 0m0.145s 00:07:17.915 20:12:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.915 20:12:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:17.915 ************************************ 00:07:17.915 END TEST accel_xor 00:07:17.915 ************************************ 00:07:17.915 20:12:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.915 20:12:56 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:17.915 20:12:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:17.915 20:12:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.915 20:12:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.172 ************************************ 00:07:18.172 START TEST accel_dif_verify 00:07:18.172 ************************************ 00:07:18.172 20:12:56 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:18.172 20:12:56 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:18.172 20:12:56 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:18.172 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.172 20:12:56 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:18.173 [2024-07-15 20:12:56.483409] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
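Note: accel_dif_verify runs accel_perf -c /dev/fd/62 -t 1 -w dif_verify (command recorded above); the option readout that follows shows 4096-byte data buffers plus 512-byte and 8-byte values that are presumably the DIF block and metadata sizes fed in by the harness. A rough standalone equivalent, assuming the same tree and that the defaults suffice without the JSON config:

    # dif_verify workload for 1 second, flags as recorded; harness config omitted
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify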
00:07:18.173 [2024-07-15 20:12:56.483462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931668 ] 00:07:18.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.173 [2024-07-15 20:12:56.545868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.173 [2024-07-15 20:12:56.639013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.173 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:18.430 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.431 20:12:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:19.363 20:12:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.363 00:07:19.363 real 0m1.410s 00:07:19.363 user 0m1.262s 00:07:19.363 sys 0m0.152s 00:07:19.363 20:12:57 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.363 20:12:57 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.363 ************************************ 00:07:19.363 END TEST accel_dif_verify 00:07:19.363 ************************************ 00:07:19.621 20:12:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.621 20:12:57 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:19.621 20:12:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:19.621 20:12:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.621 20:12:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.621 ************************************ 00:07:19.621 START TEST accel_dif_generate 00:07:19.621 ************************************ 00:07:19.621 20:12:57 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.621 
20:12:57 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:19.621 20:12:57 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:19.621 [2024-07-15 20:12:57.945133] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:19.621 [2024-07-15 20:12:57.945193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931829 ] 00:07:19.621 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.621 [2024-07-15 20:12:58.007203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.621 [2024-07-15 20:12:58.096628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.621 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:19.878 20:12:58 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:19.878 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.879 20:12:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.811 20:12:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:20.811 20:12:59 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.811 00:07:20.811 real 0m1.382s 00:07:20.811 user 0m1.256s 00:07:20.811 sys 0m0.131s 00:07:20.811 20:12:59 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.811 20:12:59 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:20.811 ************************************ 00:07:20.811 END TEST accel_dif_generate 00:07:20.811 ************************************ 00:07:20.811 20:12:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.811 20:12:59 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:20.811 20:12:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:20.811 20:12:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.811 20:12:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.070 ************************************ 00:07:21.070 START TEST accel_dif_generate_copy 00:07:21.070 ************************************ 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:21.070 [2024-07-15 20:12:59.373782] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
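Note: accel_dif_generate_copy follows the same pattern with -w dif_generate_copy (recorded command above), again completing on the software module. Standalone sketch under the same build-tree and no-config assumptions:

    # dif_generate_copy workload, 1 second, flags as recorded; config omitted
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy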
00:07:21.070 [2024-07-15 20:12:59.373849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931988 ] 00:07:21.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.070 [2024-07-15 20:12:59.436704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.070 [2024-07-15 20:12:59.529636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.070 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.071 20:12:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.444 00:07:22.444 real 0m1.404s 00:07:22.444 user 0m1.268s 00:07:22.444 sys 0m0.138s 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.444 20:13:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.444 ************************************ 00:07:22.444 END TEST accel_dif_generate_copy 00:07:22.444 ************************************ 00:07:22.444 20:13:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.444 20:13:00 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:22.444 20:13:00 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.444 20:13:00 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.444 20:13:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.444 20:13:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.444 ************************************ 00:07:22.444 START TEST accel_comp 00:07:22.444 ************************************ 00:07:22.444 20:13:00 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.444 20:13:00 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:22.444 20:13:00 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:22.444 [2024-07-15 20:13:00.825584] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:22.445 [2024-07-15 20:13:00.825650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932259 ] 00:07:22.445 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.445 [2024-07-15 20:13:00.890679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.702 [2024-07-15 20:13:00.984047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.702 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.703 20:13:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.083 20:13:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.083 20:13:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.083 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.083 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:24.084 20:13:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.084 00:07:24.084 real 0m1.416s 00:07:24.084 user 0m1.276s 00:07:24.084 sys 0m0.144s 00:07:24.084 20:13:02 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.084 20:13:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:24.084 ************************************ 00:07:24.084 END TEST accel_comp 00:07:24.084 ************************************ 00:07:24.084 20:13:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.084 20:13:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.084 20:13:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:24.084 20:13:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.084 20:13:02 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.084 ************************************ 00:07:24.084 START TEST accel_decomp 00:07:24.084 ************************************ 00:07:24.084 20:13:02 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:24.084 [2024-07-15 20:13:02.286371] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
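The decompress run being started here differs from the dif workloads only by the extra flags in its logged command line: -l names the input file (test/accel/bib, the same file the preceding compress test used) and -y is passed by the harness as well. Equivalent manual sketch, under the same assumptions as the one above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y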
00:07:24.084 [2024-07-15 20:13:02.286423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932413 ] 00:07:24.084 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.084 [2024-07-15 20:13:02.347545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.084 [2024-07-15 20:13:02.440395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 20:13:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.454 20:13:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.454 00:07:25.454 real 0m1.409s 00:07:25.454 user 0m1.268s 00:07:25.454 sys 0m0.144s 00:07:25.454 20:13:03 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.454 20:13:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:25.454 ************************************ 00:07:25.454 END TEST accel_decomp 00:07:25.454 ************************************ 00:07:25.454 20:13:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.454 20:13:03 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.454 20:13:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:25.454 20:13:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.454 20:13:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.454 ************************************ 00:07:25.454 START TEST accel_decomp_full 00:07:25.454 ************************************ 00:07:25.454 20:13:03 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.454 20:13:03 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:25.454 20:13:03 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:25.454 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.454 20:13:03 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.454 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.454 20:13:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.454 20:13:03 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:25.455 [2024-07-15 20:13:03.748892] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:25.455 [2024-07-15 20:13:03.748975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932574 ] 00:07:25.455 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.455 [2024-07-15 20:13:03.811592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.455 [2024-07-15 20:13:03.901974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.455 20:13:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.823 20:13:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.823 20:13:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.823 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.823 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.823 20:13:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.824 20:13:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.824 00:07:26.824 real 0m1.418s 00:07:26.824 user 0m1.264s 00:07:26.824 sys 0m0.157s 00:07:26.824 20:13:05 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.824 20:13:05 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:26.824 ************************************ 00:07:26.824 END TEST accel_decomp_full 00:07:26.824 ************************************ 00:07:26.824 20:13:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.824 20:13:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.824 20:13:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:26.824 20:13:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.824 20:13:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.824 ************************************ 00:07:26.824 START TEST accel_decomp_mcore 00:07:26.824 ************************************ 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:26.824 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:26.824 [2024-07-15 20:13:05.211330] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
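The mcore variant adds only a core mask: the harness reruns the same decompress workload with -m 0xf, and the reactor messages just below confirm four cores (0-3) coming up. Sketch of the equivalent manual run:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf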
00:07:26.824 [2024-07-15 20:13:05.211396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932733 ] 00:07:26.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.824 [2024-07-15 20:13:05.273823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.082 [2024-07-15 20:13:05.369219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.082 [2024-07-15 20:13:05.369273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.082 [2024-07-15 20:13:05.369389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.082 [2024-07-15 20:13:05.369392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:27.082 20:13:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.454 00:07:28.454 real 0m1.403s 00:07:28.454 user 0m4.683s 00:07:28.454 sys 0m0.148s 00:07:28.454 20:13:06 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.454 20:13:06 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:28.454 ************************************ 00:07:28.454 END TEST accel_decomp_mcore 00:07:28.454 ************************************ 00:07:28.454 20:13:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.454 20:13:06 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.454 20:13:06 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:28.454 20:13:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.454 20:13:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.454 ************************************ 00:07:28.454 START TEST accel_decomp_full_mcore 00:07:28.454 ************************************ 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:28.454 [2024-07-15 20:13:06.661789] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
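The "full" variants additionally pass -o 0 on the logged command line; judging from the values in the traces above (not from accel_perf documentation), this is what switches the per-operation buffer from the default '4096 bytes' to the '111250 bytes' seen in the decompress-full runs, i.e. the whole bib input handled in one operation. Combined with the core mask, the full_mcore run starting here corresponds to:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf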
00:07:28.454 [2024-07-15 20:13:06.661851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933010 ] 00:07:28.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.454 [2024-07-15 20:13:06.725108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.454 [2024-07-15 20:13:06.821449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.454 [2024-07-15 20:13:06.821517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.454 [2024-07-15 20:13:06.821608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.454 [2024-07-15 20:13:06.821611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.454 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.455 20:13:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.862 00:07:29.862 real 0m1.433s 00:07:29.862 user 0m4.769s 00:07:29.862 sys 0m0.159s 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.862 20:13:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.862 ************************************ 00:07:29.862 END TEST accel_decomp_full_mcore 00:07:29.862 ************************************ 00:07:29.862 20:13:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.862 20:13:08 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.862 20:13:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:29.862 20:13:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.862 20:13:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.862 ************************************ 00:07:29.862 START TEST accel_decomp_mthread 00:07:29.862 ************************************ 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:29.863 [2024-07-15 20:13:08.142804] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
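The trace above launches the accel_decomp_mthread case through build/examples/accel_perf. A rough, non-authoritative reading of the flags on that command line, inferred only from the val= reads captured in this trace (the per-flag glosses are editorial inferences, not accel_perf's own help text):

    # build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -T 2
    #   -c /dev/fd/62    JSON accel config assembled by build_accel_config and fed over fd 62
    #   -t 1             run time, the '1 seconds' value read in the trace
    #   -w decompress    workload type (accel_opc=decompress in the trace)
    #   -l .../bib       pre-compressed input file the decompress workload reads
    #   -y               verify the decompressed output
    #   -T 2             worker thread count (val=2), the "mthread" part of the test name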
00:07:29.863 [2024-07-15 20:13:08.142868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933166 ] 00:07:29.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.863 [2024-07-15 20:13:08.204701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.863 [2024-07-15 20:13:08.296064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 20:13:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.236 00:07:31.236 real 0m1.403s 00:07:31.236 user 0m1.259s 00:07:31.236 sys 0m0.148s 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.236 20:13:09 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:31.236 ************************************ 00:07:31.236 END TEST accel_decomp_mthread 00:07:31.236 ************************************ 00:07:31.236 20:13:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.236 20:13:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.236 20:13:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:31.236 20:13:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.236 20:13:09 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.236 ************************************ 00:07:31.236 START TEST accel_decomp_full_mthread 00:07:31.236 ************************************ 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:31.236 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:31.236 [2024-07-15 20:13:09.589151] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
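Each case in this section is driven through the run_test wrapper seen in the trace (run_test accel_decomp_full_mthread accel_test ...); the START/END banners and the per-test real/user/sys lines come from it. A minimal sketch of the behaviour it shows here, assuming the real helper in test/common/autotest_common.sh also handles xtrace and failure bookkeeping:

    # minimal sketch only -- the actual helper lives in test/common/autotest_common.sh
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # source of the per-test real/user/sys lines in this log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }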
00:07:31.236 [2024-07-15 20:13:09.589232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933328 ] 00:07:31.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.236 [2024-07-15 20:13:09.646280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.236 [2024-07-15 20:13:09.739038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.494 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.494 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.494 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.494 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.495 20:13:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.495 20:13:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 20:13:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.869 00:07:32.869 real 0m1.431s 00:07:32.869 user 0m1.288s 00:07:32.869 sys 0m0.146s 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.869 20:13:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:32.869 ************************************ 00:07:32.869 END 
TEST accel_decomp_full_mthread 00:07:32.869 ************************************ 00:07:32.869 20:13:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.869 20:13:11 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:32.869 20:13:11 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:32.869 20:13:11 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:32.869 20:13:11 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:32.869 20:13:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.869 20:13:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.869 20:13:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.869 20:13:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.869 20:13:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.869 20:13:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.869 20:13:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.869 20:13:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:32.869 20:13:11 accel -- accel/accel.sh@41 -- # jq -r . 00:07:32.869 ************************************ 00:07:32.869 START TEST accel_dif_functional_tests 00:07:32.870 ************************************ 00:07:32.870 20:13:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:32.870 [2024-07-15 20:13:11.088327] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:32.870 [2024-07-15 20:13:11.088386] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933560 ] 00:07:32.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.870 [2024-07-15 20:13:11.148624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.870 [2024-07-15 20:13:11.243444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.870 [2024-07-15 20:13:11.243512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.870 [2024-07-15 20:13:11.243515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.870 00:07:32.870 00:07:32.870 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.870 http://cunit.sourceforge.net/ 00:07:32.870 00:07:32.870 00:07:32.870 Suite: accel_dif 00:07:32.870 Test: verify: DIF generated, GUARD check ...passed 00:07:32.870 Test: verify: DIF generated, APPTAG check ...passed 00:07:32.870 Test: verify: DIF generated, REFTAG check ...passed 00:07:32.870 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:13:11.336508] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:32.870 passed 00:07:32.870 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:13:11.336574] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:32.870 passed 00:07:32.870 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:13:11.336604] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:32.870 passed 00:07:32.870 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:32.870 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
20:13:11.336673] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:32.870 passed 00:07:32.870 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:32.870 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:32.870 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:32.870 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:13:11.336798] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:32.870 passed 00:07:32.870 Test: verify copy: DIF generated, GUARD check ...passed 00:07:32.870 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:32.870 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:32.870 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:13:11.336975] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:32.870 passed 00:07:32.870 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:13:11.337014] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:32.870 passed 00:07:32.870 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:13:11.337046] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:32.870 passed 00:07:32.870 Test: generate copy: DIF generated, GUARD check ...passed 00:07:32.870 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:32.870 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:32.870 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:32.870 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:32.870 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:32.870 Test: generate copy: iovecs-len validate ...[2024-07-15 20:13:11.337268] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:32.870 passed 00:07:32.870 Test: generate copy: buffer alignment validate ...passed 00:07:32.870 00:07:32.870 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.870 suites 1 1 n/a 0 0 00:07:32.870 tests 26 26 26 0 0 00:07:32.870 asserts 115 115 115 0 n/a 00:07:32.870 00:07:32.870 Elapsed time = 0.002 seconds 00:07:33.129 00:07:33.129 real 0m0.500s 00:07:33.129 user 0m0.786s 00:07:33.129 sys 0m0.176s 00:07:33.129 20:13:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.129 20:13:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:33.129 ************************************ 00:07:33.129 END TEST accel_dif_functional_tests 00:07:33.129 ************************************ 00:07:33.129 20:13:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.129 00:07:33.129 real 0m31.727s 00:07:33.129 user 0m35.114s 00:07:33.129 sys 0m4.588s 00:07:33.129 20:13:11 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.129 20:13:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.129 ************************************ 00:07:33.129 END TEST accel 00:07:33.129 ************************************ 00:07:33.129 20:13:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:33.129 20:13:11 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:33.129 20:13:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.129 20:13:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.129 20:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:33.129 ************************************ 00:07:33.129 START TEST accel_rpc 00:07:33.129 ************************************ 00:07:33.129 20:13:11 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:33.387 * Looking for test storage... 00:07:33.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:33.387 20:13:11 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:33.388 20:13:11 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3933663 00:07:33.388 20:13:11 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:33.388 20:13:11 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3933663 00:07:33.388 20:13:11 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3933663 ']' 00:07:33.388 20:13:11 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.388 20:13:11 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.388 20:13:11 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.388 20:13:11 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.388 20:13:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.388 [2024-07-15 20:13:11.733312] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
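The accel_rpc section starting here brings up spdk_tgt with --wait-for-rpc and drives it over rpc.py. A condensed, hypothetical manual replay of the accel_assign_opcode flow traced below, with the rpc_cmd/waitforlisten wrappers replaced by direct calls for brevity (paths as shown elsewhere in this log):

    # hypothetical manual replay of the accel_assign_opcode flow
    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted before init: "will be assigned to module incorrect"
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # overrides the bogus assignment
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software   # expect "software"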
00:07:33.388 [2024-07-15 20:13:11.733396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933663 ] 00:07:33.388 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.388 [2024-07-15 20:13:11.789721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.388 [2024-07-15 20:13:11.876342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.646 20:13:11 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.646 20:13:11 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:33.646 20:13:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:33.646 20:13:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:33.646 20:13:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:33.646 20:13:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:33.646 20:13:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:33.646 20:13:11 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.646 20:13:11 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.646 20:13:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.646 ************************************ 00:07:33.646 START TEST accel_assign_opcode 00:07:33.646 ************************************ 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.646 [2024-07-15 20:13:11.969007] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.646 [2024-07-15 20:13:11.977035] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.646 20:13:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.905 software 00:07:33.905 00:07:33.905 real 0m0.300s 00:07:33.905 user 0m0.045s 00:07:33.905 sys 0m0.007s 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.905 20:13:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.905 ************************************ 00:07:33.905 END TEST accel_assign_opcode 00:07:33.905 ************************************ 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:33.905 20:13:12 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3933663 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3933663 ']' 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3933663 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3933663 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3933663' 00:07:33.905 killing process with pid 3933663 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@967 -- # kill 3933663 00:07:33.905 20:13:12 accel_rpc -- common/autotest_common.sh@972 -- # wait 3933663 00:07:34.472 00:07:34.472 real 0m1.112s 00:07:34.472 user 0m1.048s 00:07:34.472 sys 0m0.429s 00:07:34.472 20:13:12 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.472 20:13:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.472 ************************************ 00:07:34.472 END TEST accel_rpc 00:07:34.472 ************************************ 00:07:34.472 20:13:12 -- common/autotest_common.sh@1142 -- # return 0 00:07:34.472 20:13:12 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:34.472 20:13:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.472 20:13:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.472 20:13:12 -- common/autotest_common.sh@10 -- # set +x 00:07:34.472 ************************************ 00:07:34.472 START TEST app_cmdline 00:07:34.472 ************************************ 00:07:34.472 20:13:12 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:34.472 * Looking for test storage... 
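The app_cmdline section beginning here starts spdk_tgt with an RPC allow-list and checks that only the two listed methods are reachable. A condensed, hypothetical manual replay of the positive checks traced below (direct rpc.py calls substituted for the rpc_cmd helper; the negative path is sketched further below):

    # hypothetical manual replay of the positive cmdline.sh checks
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version                       # allowed: returns the version JSON seen below
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # should list exactly the two allowed methods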
00:07:34.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.472 20:13:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:34.472 20:13:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3933877 00:07:34.472 20:13:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:34.472 20:13:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3933877 00:07:34.472 20:13:12 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3933877 ']' 00:07:34.472 20:13:12 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.472 20:13:12 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.472 20:13:12 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.472 20:13:12 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.472 20:13:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.472 [2024-07-15 20:13:12.891523] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:34.472 [2024-07-15 20:13:12.891607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933877 ] 00:07:34.472 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.472 [2024-07-15 20:13:12.950929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.731 [2024-07-15 20:13:13.037291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.988 20:13:13 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.989 20:13:13 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:34.989 20:13:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:34.989 { 00:07:34.989 "version": "SPDK v24.09-pre git sha1 a95bbf233", 00:07:34.989 "fields": { 00:07:34.989 "major": 24, 00:07:34.989 "minor": 9, 00:07:34.989 "patch": 0, 00:07:34.989 "suffix": "-pre", 00:07:34.989 "commit": "a95bbf233" 00:07:34.989 } 00:07:34.989 } 00:07:34.989 20:13:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.989 20:13:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.989 20:13:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.989 20:13:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.989 20:13:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:35.246 20:13:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.246 20:13:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.246 20:13:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:35.246 20:13:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:35.246 20:13:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:35.246 20:13:13 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.504 request: 00:07:35.504 { 00:07:35.504 "method": "env_dpdk_get_mem_stats", 00:07:35.504 "req_id": 1 00:07:35.504 } 00:07:35.504 Got JSON-RPC error response 00:07:35.504 response: 00:07:35.504 { 00:07:35.504 "code": -32601, 00:07:35.504 "message": "Method not found" 00:07:35.504 } 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.504 20:13:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3933877 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3933877 ']' 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3933877 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3933877 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3933877' 00:07:35.504 killing process with pid 3933877 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@967 -- # kill 3933877 00:07:35.504 20:13:13 app_cmdline -- common/autotest_common.sh@972 -- # wait 3933877 00:07:35.763 00:07:35.763 real 0m1.456s 00:07:35.763 user 0m1.779s 00:07:35.763 sys 0m0.463s 00:07:35.763 20:13:14 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
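The "Method not found" request/response above is the negative half of that allow-list check, driven through the NOT helper visible in the trace (cmdline.sh@30). A one-line sketch of the assertion, hedged since the real NOT() in autotest_common.sh also validates its argument before running it:

    # the test passes only if the filtered RPC fails (NOT inverts the exit status)
    NOT ./scripts/rpc.py env_dpdk_get_mem_stats    # expects JSON-RPC error -32601, "Method not found"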
00:07:35.763 20:13:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.763 ************************************ 00:07:35.763 END TEST app_cmdline 00:07:35.763 ************************************ 00:07:35.763 20:13:14 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.763 20:13:14 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:35.763 20:13:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.763 20:13:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.763 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:36.021 ************************************ 00:07:36.021 START TEST version 00:07:36.021 ************************************ 00:07:36.021 20:13:14 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:36.021 * Looking for test storage... 00:07:36.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.021 20:13:14 version -- app/version.sh@17 -- # get_header_version major 00:07:36.021 20:13:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # cut -f2 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.021 20:13:14 version -- app/version.sh@17 -- # major=24 00:07:36.021 20:13:14 version -- app/version.sh@18 -- # get_header_version minor 00:07:36.021 20:13:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # cut -f2 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.021 20:13:14 version -- app/version.sh@18 -- # minor=9 00:07:36.021 20:13:14 version -- app/version.sh@19 -- # get_header_version patch 00:07:36.021 20:13:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # cut -f2 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.021 20:13:14 version -- app/version.sh@19 -- # patch=0 00:07:36.021 20:13:14 version -- app/version.sh@20 -- # get_header_version suffix 00:07:36.021 20:13:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # cut -f2 00:07:36.021 20:13:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.021 20:13:14 version -- app/version.sh@20 -- # suffix=-pre 00:07:36.021 20:13:14 version -- app/version.sh@22 -- # version=24.9 00:07:36.021 20:13:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:36.021 20:13:14 version -- app/version.sh@28 -- # version=24.9rc0 00:07:36.021 20:13:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:36.021 20:13:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:36.021 20:13:14 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:36.021 20:13:14 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:36.021 00:07:36.021 real 0m0.107s 00:07:36.021 user 0m0.057s 00:07:36.021 sys 0m0.073s 00:07:36.021 20:13:14 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.021 20:13:14 version -- common/autotest_common.sh@10 -- # set +x 00:07:36.021 ************************************ 00:07:36.021 END TEST version 00:07:36.021 ************************************ 00:07:36.021 20:13:14 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.021 20:13:14 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@198 -- # uname -s 00:07:36.021 20:13:14 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:36.021 20:13:14 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:36.021 20:13:14 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:36.021 20:13:14 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:36.021 20:13:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.021 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:36.021 20:13:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:36.021 20:13:14 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:36.021 20:13:14 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:36.021 20:13:14 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:36.021 20:13:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.021 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:36.021 ************************************ 00:07:36.021 START TEST nvmf_tcp 00:07:36.021 ************************************ 00:07:36.021 20:13:14 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:36.021 * Looking for test storage... 00:07:36.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.021 20:13:14 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.022 20:13:14 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.022 20:13:14 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.022 20:13:14 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.022 20:13:14 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.022 20:13:14 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.022 20:13:14 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.022 20:13:14 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:36.022 20:13:14 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:36.022 20:13:14 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.022 20:13:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:36.022 20:13:14 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:36.022 20:13:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:36.022 20:13:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.022 20:13:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.022 ************************************ 00:07:36.022 START TEST nvmf_example 00:07:36.022 ************************************ 00:07:36.022 20:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:36.281 * Looking for test storage... 
00:07:36.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:36.281 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.282 20:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.185 20:13:16 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:38.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:38.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:38.186 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:38.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.186 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:07:38.445 00:07:38.445 --- 10.0.0.2 ping statistics --- 00:07:38.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.445 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:07:38.445 00:07:38.445 --- 10.0.0.1 ping statistics --- 00:07:38.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.445 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3935899 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3935899 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3935899 ']' 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
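The interface plumbing above is how nvmf_tcp_init splits the two ice ports into target and initiator roles: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the default namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with a single ping before the example target is launched. Condensed into a sketch (illustrative only; device names and addresses as used in this run):

# Sketch of the namespace/topology setup shown above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator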
00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.445 20:13:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.445 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:39.379 20:13:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:39.379 EAL: No free 2048 kB hugepages reported on node 1 
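Before the perf run that follows, the target is configured over RPC with the example's standard sequence: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Issued directly with rpc.py, the same configuration looks roughly like this (a sketch using the arguments from this run, against the default /var/tmp/spdk.sock socket; not part of the captured output):

# Sketch of the RPC configuration shown above.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                                      # returns "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420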
00:07:51.572 Initializing NVMe Controllers 00:07:51.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:51.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:51.573 Initialization complete. Launching workers. 00:07:51.573 ======================================================== 00:07:51.573 Latency(us) 00:07:51.573 Device Information : IOPS MiB/s Average min max 00:07:51.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14083.94 55.02 4543.67 891.26 15628.03 00:07:51.573 ======================================================== 00:07:51.573 Total : 14083.94 55.02 4543.67 891.26 15628.03 00:07:51.573 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.573 rmmod nvme_tcp 00:07:51.573 rmmod nvme_fabrics 00:07:51.573 rmmod nvme_keyring 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3935899 ']' 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3935899 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3935899 ']' 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3935899 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3935899 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3935899' 00:07:51.573 killing process with pid 3935899 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3935899 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3935899 00:07:51.573 nvmf threads initialize successfully 00:07:51.573 bdev subsystem init successfully 00:07:51.573 created a nvmf target service 00:07:51.573 create targets's poll groups done 00:07:51.573 all subsystems of target started 00:07:51.573 nvmf target is running 00:07:51.573 all subsystems of target stopped 00:07:51.573 destroy targets's poll groups done 00:07:51.573 destroyed the nvmf target service 00:07:51.573 bdev subsystem finish successfully 00:07:51.573 nvmf threads destroy successfully 00:07:51.573 20:13:28 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.573 20:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.831 20:13:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.831 20:13:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:51.831 20:13:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.831 20:13:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 00:07:52.092 real 0m15.822s 00:07:52.092 user 0m44.820s 00:07:52.092 sys 0m3.268s 00:07:52.092 20:13:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.092 20:13:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 ************************************ 00:07:52.092 END TEST nvmf_example 00:07:52.092 ************************************ 00:07:52.092 20:13:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:52.092 20:13:30 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:52.092 20:13:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:52.092 20:13:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.092 20:13:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 ************************************ 00:07:52.092 START TEST nvmf_filesystem 00:07:52.092 ************************************ 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:52.092 * Looking for test storage... 
00:07:52.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:52.092 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:52.092 20:13:30 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:52.093 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:52.093 #define SPDK_CONFIG_H 00:07:52.093 #define SPDK_CONFIG_APPS 1 00:07:52.093 #define SPDK_CONFIG_ARCH native 00:07:52.093 #undef SPDK_CONFIG_ASAN 00:07:52.093 #undef SPDK_CONFIG_AVAHI 00:07:52.093 #undef SPDK_CONFIG_CET 00:07:52.093 #define SPDK_CONFIG_COVERAGE 1 00:07:52.093 #define SPDK_CONFIG_CROSS_PREFIX 00:07:52.093 #undef SPDK_CONFIG_CRYPTO 00:07:52.093 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:52.093 #undef SPDK_CONFIG_CUSTOMOCF 00:07:52.093 #undef SPDK_CONFIG_DAOS 00:07:52.093 #define SPDK_CONFIG_DAOS_DIR 00:07:52.094 #define SPDK_CONFIG_DEBUG 1 00:07:52.094 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:52.094 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:52.094 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:52.094 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.094 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:52.094 #undef SPDK_CONFIG_DPDK_UADK 00:07:52.094 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:52.094 #define SPDK_CONFIG_EXAMPLES 1 00:07:52.094 #undef SPDK_CONFIG_FC 00:07:52.094 #define SPDK_CONFIG_FC_PATH 00:07:52.094 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:52.094 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:52.094 #undef SPDK_CONFIG_FUSE 00:07:52.094 #undef SPDK_CONFIG_FUZZER 00:07:52.094 #define SPDK_CONFIG_FUZZER_LIB 00:07:52.094 #undef SPDK_CONFIG_GOLANG 00:07:52.094 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:52.094 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:52.094 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:52.094 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:52.094 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:52.094 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:52.094 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:52.094 #define SPDK_CONFIG_IDXD 1 00:07:52.094 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:52.094 #undef SPDK_CONFIG_IPSEC_MB 00:07:52.094 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:52.094 #define SPDK_CONFIG_ISAL 1 00:07:52.094 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:52.094 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:52.094 #define 
SPDK_CONFIG_LIBDIR 00:07:52.094 #undef SPDK_CONFIG_LTO 00:07:52.094 #define SPDK_CONFIG_MAX_LCORES 128 00:07:52.094 #define SPDK_CONFIG_NVME_CUSE 1 00:07:52.094 #undef SPDK_CONFIG_OCF 00:07:52.094 #define SPDK_CONFIG_OCF_PATH 00:07:52.094 #define SPDK_CONFIG_OPENSSL_PATH 00:07:52.094 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:52.094 #define SPDK_CONFIG_PGO_DIR 00:07:52.094 #undef SPDK_CONFIG_PGO_USE 00:07:52.094 #define SPDK_CONFIG_PREFIX /usr/local 00:07:52.094 #undef SPDK_CONFIG_RAID5F 00:07:52.094 #undef SPDK_CONFIG_RBD 00:07:52.094 #define SPDK_CONFIG_RDMA 1 00:07:52.094 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:52.094 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:52.094 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:52.094 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:52.094 #define SPDK_CONFIG_SHARED 1 00:07:52.094 #undef SPDK_CONFIG_SMA 00:07:52.094 #define SPDK_CONFIG_TESTS 1 00:07:52.094 #undef SPDK_CONFIG_TSAN 00:07:52.094 #define SPDK_CONFIG_UBLK 1 00:07:52.094 #define SPDK_CONFIG_UBSAN 1 00:07:52.094 #undef SPDK_CONFIG_UNIT_TESTS 00:07:52.094 #undef SPDK_CONFIG_URING 00:07:52.094 #define SPDK_CONFIG_URING_PATH 00:07:52.094 #undef SPDK_CONFIG_URING_ZNS 00:07:52.094 #undef SPDK_CONFIG_USDT 00:07:52.094 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:52.094 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:52.094 #define SPDK_CONFIG_VFIO_USER 1 00:07:52.094 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:52.094 #define SPDK_CONFIG_VHOST 1 00:07:52.094 #define SPDK_CONFIG_VIRTIO 1 00:07:52.094 #undef SPDK_CONFIG_VTUNE 00:07:52.094 #define SPDK_CONFIG_VTUNE_DIR 00:07:52.094 #define SPDK_CONFIG_WERROR 1 00:07:52.094 #define SPDK_CONFIG_WPDK_DIR 00:07:52.094 #undef SPDK_CONFIG_XNVME 00:07:52.094 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:52.094 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:52.095 
20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:52.095 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:52.096 
20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:52.096 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
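The block above wires up the library and sanitizer environment for the run: SPDK/DPDK/vfio-user library directories go into LD_LIBRARY_PATH, Python bytecode writing is disabled, and ASAN/UBSAN/LSAN options plus a libfuse3 leak suppression file are exported. A rough local equivalent, with the option strings copied from this log rather than taken as authoritative defaults:

  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file    # LeakSanitizer suppression written by the harness
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
  export LD_LIBRARY_PATH="$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
  export PYTHONDONTWRITEBYTECODE=1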
00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3937603 ]] 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3937603 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.QoCbPY 00:07:52.097 
20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QoCbPY/tests/target /tmp/spdk.QoCbPY 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53452623872 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8542068736 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.097 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996144128 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1204224 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:52.098 * Looking for test storage... 
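The set_test_storage probe that follows compares the free space on the filesystem backing the test directory (the overlay root here, with roughly 53 GB available) against the ~2.2 GB the test requests, then exports the directory as SPDK_TEST_STORAGE when it fits. A simplified stand-alone sketch of the same check, using GNU df directly instead of the script's associative arrays:

  testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  requested_size=2214592512                                   # bytes, as computed in this run
  avail_bytes=$(df --output=avail -B1 "$testdir" | tail -n 1)
  if (( avail_bytes >= requested_size )); then
      export SPDK_TEST_STORAGE=$testdir
      printf '* Found test storage at %s\n' "$testdir"
  else
      echo "not enough space for the requested $requested_size bytes" >&2
  fi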
00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53452623872 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10756661248 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.098 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
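nvmf/common.sh, sourced above, fixes the listener ports (4420-4422) and derives the host identity from nvme gen-hostnqn; the UUID portion of that NQN doubles as the host ID passed to every later nvme connect. Roughly, and with the exact derivation of the host ID being an assumption rather than a quote from the script:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # bare UUID, reused as --hostid on connect
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVMF_PORT=4420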
00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.099 20:13:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
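gather_supported_nvmf_pci_devs builds the e810/x722/mlx PCI ID allow-lists above and then, in the lines that follow, resolves each matching PCI function to its kernel net device through sysfs. A cut-down version of that lookup, with the two PCI addresses taken from this log:

  for pci in 0000:0a:00.0 0000:0a:00.1; do            # 0x8086:0x159b (E810) ports found in this run
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $netdir ]] || continue                # no bound netdev, skip this function
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done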
00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:54.034 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:54.034 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.034 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:54.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:54.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.035 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.293 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.293 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.293 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:54.293 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.293 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:07:54.294 00:07:54.294 --- 10.0.0.2 ping statistics --- 00:07:54.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.294 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:07:54.294 00:07:54.294 --- 10.0.0.1 ping statistics --- 00:07:54.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.294 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.294 ************************************ 00:07:54.294 START TEST nvmf_filesystem_no_in_capsule 00:07:54.294 ************************************ 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3939225 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3939225 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
3939225 ']' 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.294 20:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.294 [2024-07-15 20:13:32.764413] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:07:54.294 [2024-07-15 20:13:32.764504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.552 [2024-07-15 20:13:32.833381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.552 [2024-07-15 20:13:32.928087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.552 [2024-07-15 20:13:32.928139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.552 [2024-07-15 20:13:32.928165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.552 [2024-07-15 20:13:32.928179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.552 [2024-07-15 20:13:32.928190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
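At this point nvmfappstart has launched nvmf_tgt (pid 3939225) inside the cvl_0_0_ns_spdk namespace and waitforlisten is polling the RPC socket; the reactor start-up notices follow. A condensed, non-authoritative sketch of that launch-and-wait pattern (the real helper also installs cleanup traps and a retry limit):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                       # keep polling until the target is listening
  done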
00:07:54.552 [2024-07-15 20:13:32.928266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.552 [2024-07-15 20:13:32.928317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.552 [2024-07-15 20:13:32.928434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.552 [2024-07-15 20:13:32.928437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.552 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.552 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:54.552 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.552 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.552 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.810 [2024-07-15 20:13:33.091806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.810 Malloc1 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.810 [2024-07-15 20:13:33.284217] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.810 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:54.810 { 00:07:54.810 "name": "Malloc1", 00:07:54.810 "aliases": [ 00:07:54.810 "d0817749-eebf-4fe5-a87b-dc17adf413d6" 00:07:54.810 ], 00:07:54.810 "product_name": "Malloc disk", 00:07:54.810 "block_size": 512, 00:07:54.810 "num_blocks": 1048576, 00:07:54.810 "uuid": "d0817749-eebf-4fe5-a87b-dc17adf413d6", 00:07:54.810 "assigned_rate_limits": { 00:07:54.810 "rw_ios_per_sec": 0, 00:07:54.810 "rw_mbytes_per_sec": 0, 00:07:54.810 "r_mbytes_per_sec": 0, 00:07:54.810 "w_mbytes_per_sec": 0 00:07:54.810 }, 00:07:54.810 "claimed": true, 00:07:54.810 "claim_type": "exclusive_write", 00:07:54.810 "zoned": false, 00:07:54.810 "supported_io_types": { 00:07:54.810 "read": true, 00:07:54.810 "write": true, 00:07:54.810 "unmap": true, 00:07:54.810 "flush": true, 00:07:54.810 "reset": true, 00:07:54.810 "nvme_admin": false, 00:07:54.810 "nvme_io": false, 00:07:54.810 "nvme_io_md": false, 00:07:54.810 "write_zeroes": true, 00:07:54.810 "zcopy": true, 00:07:54.810 "get_zone_info": false, 00:07:54.810 "zone_management": false, 00:07:54.810 "zone_append": false, 00:07:54.810 "compare": false, 00:07:54.810 "compare_and_write": false, 00:07:54.810 "abort": true, 00:07:54.811 "seek_hole": false, 00:07:54.811 "seek_data": false, 00:07:54.811 "copy": true, 00:07:54.811 "nvme_iov_md": false 00:07:54.811 }, 00:07:54.811 "memory_domains": [ 00:07:54.811 { 
00:07:54.811 "dma_device_id": "system", 00:07:54.811 "dma_device_type": 1 00:07:54.811 }, 00:07:54.811 { 00:07:54.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.811 "dma_device_type": 2 00:07:54.811 } 00:07:54.811 ], 00:07:54.811 "driver_specific": {} 00:07:54.811 } 00:07:54.811 ]' 00:07:54.811 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:55.068 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:55.068 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:55.068 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:55.068 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:55.068 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:55.068 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:55.068 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.633 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.633 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.633 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.633 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:55.633 20:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:57.529 20:13:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:57.530 20:13:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:57.530 20:13:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.530 20:13:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:57.530 20:13:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.530 20:13:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.530 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:58.095 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:58.353 20:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.286 ************************************ 00:07:59.286 START TEST filesystem_ext4 00:07:59.286 ************************************ 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:59.286 20:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:59.286 20:13:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:59.286 mke2fs 1.46.5 (30-Dec-2021) 00:07:59.543 Discarding device blocks: 0/522240 done 00:07:59.543 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:59.543 Filesystem UUID: 967fdd49-0960-4972-b459-fd58de4f2e73 00:07:59.543 Superblock backups stored on blocks: 00:07:59.543 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:59.543 00:07:59.543 Allocating group tables: 0/64 done 00:07:59.543 Writing inode tables: 0/64 done 00:08:02.815 Creating journal (8192 blocks): done 00:08:03.380 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:03.380 00:08:03.380 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:03.380 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.380 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3939225 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.638 00:08:03.638 real 0m4.225s 00:08:03.638 user 0m0.024s 00:08:03.638 sys 0m0.050s 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.638 20:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:03.638 ************************************ 00:08:03.638 END TEST filesystem_ext4 00:08:03.638 ************************************ 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:03.638 20:13:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.638 ************************************ 00:08:03.638 START TEST filesystem_btrfs 00:08:03.638 ************************************ 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:03.638 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:03.896 btrfs-progs v6.6.2 00:08:03.896 See https://btrfs.readthedocs.io for more information. 00:08:03.896 00:08:03.896 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:03.896 NOTE: several default settings have changed in version 5.15, please make sure 00:08:03.896 this does not affect your deployments: 00:08:03.896 - DUP for metadata (-m dup) 00:08:03.896 - enabled no-holes (-O no-holes) 00:08:03.896 - enabled free-space-tree (-R free-space-tree) 00:08:03.896 00:08:03.896 Label: (null) 00:08:03.896 UUID: 327204ae-2a28-40d3-b912-4ce4bb5edd39 00:08:03.896 Node size: 16384 00:08:03.896 Sector size: 4096 00:08:03.896 Filesystem size: 510.00MiB 00:08:03.896 Block group profiles: 00:08:03.896 Data: single 8.00MiB 00:08:03.896 Metadata: DUP 32.00MiB 00:08:03.896 System: DUP 8.00MiB 00:08:03.896 SSD detected: yes 00:08:03.896 Zoned device: no 00:08:03.896 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:03.896 Runtime features: free-space-tree 00:08:03.896 Checksum: crc32c 00:08:03.896 Number of devices: 1 00:08:03.896 Devices: 00:08:03.896 ID SIZE PATH 00:08:03.896 1 510.00MiB /dev/nvme0n1p1 00:08:03.896 00:08:03.896 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:03.896 20:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3939225 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.830 00:08:04.830 real 0m1.018s 00:08:04.830 user 0m0.023s 00:08:04.830 sys 0m0.099s 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:04.830 ************************************ 00:08:04.830 END TEST filesystem_btrfs 00:08:04.830 ************************************ 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.830 ************************************ 00:08:04.830 START TEST filesystem_xfs 00:08:04.830 ************************************ 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:04.830 20:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.830 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.830 = sectsz=512 attr=2, projid32bit=1 00:08:04.830 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.830 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.830 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.831 = sunit=0 swidth=0 blks 00:08:04.831 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.831 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.831 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.831 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.762 Discarding blocks...Done. 
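Condensed, the per-filesystem check that target/filesystem.sh runs here (whose mount/touch/rm/umount steps follow in the trace) is roughly:

    mkfs.xfs -f /dev/nvme0n1p1                  # ext4 uses -F, btrfs uses -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync               # prove the exported namespace accepts writes
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                          # the nvmf_tgt process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1       # device and partition are still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1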
00:08:05.762 20:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:05.762 20:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3939225 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.287 00:08:08.287 real 0m3.720s 00:08:08.287 user 0m0.014s 00:08:08.287 sys 0m0.061s 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.287 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:08.287 ************************************ 00:08:08.287 END TEST filesystem_xfs 00:08:08.287 ************************************ 00:08:08.545 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:08.545 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:08.545 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:08.545 20:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.545 20:13:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3939225 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3939225 ']' 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3939225 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.545 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3939225 00:08:08.803 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.803 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.803 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3939225' 00:08:08.803 killing process with pid 3939225 00:08:08.803 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3939225 00:08:08.803 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3939225 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:09.063 00:08:09.063 real 0m14.799s 00:08:09.063 user 0m57.025s 00:08:09.063 sys 0m1.956s 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.063 ************************************ 00:08:09.063 END TEST nvmf_filesystem_no_in_capsule 00:08:09.063 ************************************ 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.063 ************************************ 00:08:09.063 START TEST nvmf_filesystem_in_capsule 00:08:09.063 ************************************ 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3941192 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3941192 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3941192 ']' 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.063 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.322 [2024-07-15 20:13:47.620845] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:08:09.322 [2024-07-15 20:13:47.620960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.322 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.322 [2024-07-15 20:13:47.689259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.322 [2024-07-15 20:13:47.778284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.322 [2024-07-15 20:13:47.778345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:09.322 [2024-07-15 20:13:47.778374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.322 [2024-07-15 20:13:47.778387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.322 [2024-07-15 20:13:47.778398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.322 [2024-07-15 20:13:47.778485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.322 [2024-07-15 20:13:47.778557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.322 [2024-07-15 20:13:47.778648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.322 [2024-07-15 20:13:47.778650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 [2024-07-15 20:13:47.927476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.581 20:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 Malloc1 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.581 20:13:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.581 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 [2024-07-15 20:13:48.112179] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:09.839 { 00:08:09.839 "name": "Malloc1", 00:08:09.839 "aliases": [ 00:08:09.839 "9c1ce80e-f30f-431e-af69-e0f8faf47a5a" 00:08:09.839 ], 00:08:09.839 "product_name": "Malloc disk", 00:08:09.839 "block_size": 512, 00:08:09.839 "num_blocks": 1048576, 00:08:09.839 "uuid": "9c1ce80e-f30f-431e-af69-e0f8faf47a5a", 00:08:09.839 "assigned_rate_limits": { 00:08:09.839 "rw_ios_per_sec": 0, 00:08:09.839 "rw_mbytes_per_sec": 0, 00:08:09.839 "r_mbytes_per_sec": 0, 00:08:09.839 "w_mbytes_per_sec": 0 00:08:09.839 }, 00:08:09.839 "claimed": true, 00:08:09.839 "claim_type": "exclusive_write", 00:08:09.839 "zoned": false, 00:08:09.839 "supported_io_types": { 00:08:09.839 "read": true, 00:08:09.839 "write": true, 00:08:09.839 "unmap": true, 00:08:09.839 "flush": true, 00:08:09.839 "reset": true, 00:08:09.839 "nvme_admin": false, 00:08:09.839 "nvme_io": false, 00:08:09.839 "nvme_io_md": false, 00:08:09.839 "write_zeroes": true, 00:08:09.839 "zcopy": true, 00:08:09.839 "get_zone_info": false, 00:08:09.839 "zone_management": false, 00:08:09.839 
"zone_append": false, 00:08:09.839 "compare": false, 00:08:09.839 "compare_and_write": false, 00:08:09.839 "abort": true, 00:08:09.839 "seek_hole": false, 00:08:09.839 "seek_data": false, 00:08:09.839 "copy": true, 00:08:09.839 "nvme_iov_md": false 00:08:09.839 }, 00:08:09.839 "memory_domains": [ 00:08:09.839 { 00:08:09.839 "dma_device_id": "system", 00:08:09.839 "dma_device_type": 1 00:08:09.839 }, 00:08:09.839 { 00:08:09.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.839 "dma_device_type": 2 00:08:09.839 } 00:08:09.839 ], 00:08:09.839 "driver_specific": {} 00:08:09.839 } 00:08:09.839 ]' 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:09.839 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.405 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.405 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.405 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.405 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:10.405 20:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:12.965 20:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:12.965 20:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:13.223 20:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.155 ************************************ 00:08:14.155 START TEST filesystem_in_capsule_ext4 00:08:14.155 ************************************ 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:14.155 20:13:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:14.155 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:14.155 mke2fs 1.46.5 (30-Dec-2021) 00:08:14.155 Discarding device blocks: 0/522240 done 00:08:14.155 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:14.155 Filesystem UUID: 07edbadf-d260-4d52-8f71-ef414f8b8b52 00:08:14.155 Superblock backups stored on blocks: 00:08:14.155 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:14.155 00:08:14.155 Allocating group tables: 0/64 done 00:08:14.155 Writing inode tables: 0/64 done 00:08:14.411 Creating journal (8192 blocks): done 00:08:14.411 Writing superblocks and filesystem accounting information: 0/64 done 00:08:14.411 00:08:14.411 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:14.411 20:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3941192 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.975 00:08:14.975 real 0m0.858s 00:08:14.975 user 0m0.018s 00:08:14.975 sys 0m0.059s 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 ************************************ 00:08:14.975 END TEST filesystem_in_capsule_ext4 00:08:14.975 ************************************ 00:08:14.975 
20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.975 ************************************ 00:08:14.975 START TEST filesystem_in_capsule_btrfs 00:08:14.975 ************************************ 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:14.975 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:15.233 btrfs-progs v6.6.2 00:08:15.233 See https://btrfs.readthedocs.io for more information. 00:08:15.233 00:08:15.233 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:15.233 NOTE: several default settings have changed in version 5.15, please make sure 00:08:15.233 this does not affect your deployments: 00:08:15.233 - DUP for metadata (-m dup) 00:08:15.233 - enabled no-holes (-O no-holes) 00:08:15.233 - enabled free-space-tree (-R free-space-tree) 00:08:15.233 00:08:15.233 Label: (null) 00:08:15.233 UUID: 6bd27266-4da6-4146-baca-6354532be529 00:08:15.233 Node size: 16384 00:08:15.233 Sector size: 4096 00:08:15.233 Filesystem size: 510.00MiB 00:08:15.233 Block group profiles: 00:08:15.233 Data: single 8.00MiB 00:08:15.233 Metadata: DUP 32.00MiB 00:08:15.233 System: DUP 8.00MiB 00:08:15.233 SSD detected: yes 00:08:15.233 Zoned device: no 00:08:15.233 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:15.233 Runtime features: free-space-tree 00:08:15.233 Checksum: crc32c 00:08:15.233 Number of devices: 1 00:08:15.233 Devices: 00:08:15.233 ID SIZE PATH 00:08:15.233 1 510.00MiB /dev/nvme0n1p1 00:08:15.233 00:08:15.490 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:15.490 20:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.054 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.054 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:16.054 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.054 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:16.054 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:16.054 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3941192 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.312 00:08:16.312 real 0m1.128s 00:08:16.312 user 0m0.020s 00:08:16.312 sys 0m0.115s 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:16.312 ************************************ 00:08:16.312 END TEST filesystem_in_capsule_btrfs 00:08:16.312 ************************************ 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.312 ************************************ 00:08:16.312 START TEST filesystem_in_capsule_xfs 00:08:16.312 ************************************ 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:16.312 20:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:16.312 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:16.312 = sectsz=512 attr=2, projid32bit=1 00:08:16.312 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:16.312 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:16.312 data = bsize=4096 blocks=130560, imaxpct=25 00:08:16.312 = sunit=0 swidth=0 blks 00:08:16.312 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:16.312 log =internal log bsize=4096 blocks=16384, version=2 00:08:16.312 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:16.312 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:17.241 Discarding blocks...Done. 
00:08:17.241 20:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:17.241 20:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.758 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3941192 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.759 00:08:19.759 real 0m3.195s 00:08:19.759 user 0m0.022s 00:08:19.759 sys 0m0.054s 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:19.759 ************************************ 00:08:19.759 END TEST filesystem_in_capsule_xfs 00:08:19.759 ************************************ 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:19.759 20:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:19.759 20:13:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3941192 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3941192 ']' 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3941192 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3941192 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3941192' 00:08:19.759 killing process with pid 3941192 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3941192 00:08:19.759 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3941192 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:20.323 00:08:20.323 real 0m11.149s 00:08:20.323 user 0m42.781s 00:08:20.323 sys 0m1.703s 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.323 ************************************ 00:08:20.323 END TEST nvmf_filesystem_in_capsule 00:08:20.323 ************************************ 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.323 rmmod nvme_tcp 00:08:20.323 rmmod nvme_fabrics 00:08:20.323 rmmod nvme_keyring 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.323 20:13:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.853 20:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.853 00:08:22.853 real 0m30.421s 00:08:22.853 user 1m40.703s 00:08:22.853 sys 0m5.237s 00:08:22.853 20:14:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.853 20:14:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.853 ************************************ 00:08:22.853 END TEST nvmf_filesystem 00:08:22.853 ************************************ 00:08:22.853 20:14:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:22.853 20:14:00 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:22.853 20:14:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.853 20:14:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.853 20:14:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.853 ************************************ 00:08:22.853 START TEST nvmf_target_discovery 00:08:22.853 ************************************ 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:22.853 * Looking for test storage... 
00:08:22.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.853 20:14:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.755 20:14:02 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:24.755 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:24.755 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:24.755 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:24.755 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:24.755 20:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:24.755 00:08:24.755 --- 10.0.0.2 ping statistics --- 00:08:24.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.755 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:08:24.755 00:08:24.755 --- 10.0.0.1 ping statistics --- 00:08:24.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.755 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.755 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3944541 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3944541 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3944541 ']' 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:24.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.756 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.756 [2024-07-15 20:14:03.112826] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:08:24.756 [2024-07-15 20:14:03.112930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.756 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.756 [2024-07-15 20:14:03.177512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.756 [2024-07-15 20:14:03.266674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.756 [2024-07-15 20:14:03.266738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.756 [2024-07-15 20:14:03.266751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.756 [2024-07-15 20:14:03.266762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.756 [2024-07-15 20:14:03.266771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.756 [2024-07-15 20:14:03.266853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.756 [2024-07-15 20:14:03.266921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.756 [2024-07-15 20:14:03.266985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.756 [2024-07-15 20:14:03.266987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.013 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.013 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:25.013 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.013 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.013 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.013 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 [2024-07-15 20:14:03.411603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 Null1 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 [2024-07-15 20:14:03.451916] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 Null2 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:25.014 20:14:03 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 Null3 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 Null4 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.014 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:25.271 00:08:25.271 Discovery Log Number of Records 6, Generation counter 6 00:08:25.271 =====Discovery Log Entry 0====== 00:08:25.271 trtype: tcp 00:08:25.271 adrfam: ipv4 00:08:25.271 subtype: current discovery subsystem 00:08:25.271 treq: not required 00:08:25.271 portid: 0 00:08:25.271 trsvcid: 4420 00:08:25.271 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:25.271 traddr: 10.0.0.2 00:08:25.271 eflags: explicit discovery connections, duplicate discovery information 00:08:25.271 sectype: none 00:08:25.271 =====Discovery Log Entry 1====== 00:08:25.271 trtype: tcp 00:08:25.271 adrfam: ipv4 00:08:25.271 subtype: nvme subsystem 00:08:25.271 treq: not required 00:08:25.271 portid: 0 00:08:25.271 trsvcid: 4420 00:08:25.271 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:25.271 traddr: 10.0.0.2 00:08:25.271 eflags: none 00:08:25.271 sectype: none 00:08:25.271 =====Discovery Log Entry 2====== 00:08:25.271 trtype: tcp 00:08:25.271 adrfam: ipv4 00:08:25.271 subtype: nvme subsystem 00:08:25.271 treq: not required 00:08:25.271 portid: 0 00:08:25.271 trsvcid: 4420 00:08:25.271 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:25.271 traddr: 10.0.0.2 00:08:25.271 eflags: none 00:08:25.271 sectype: none 00:08:25.271 =====Discovery Log Entry 3====== 00:08:25.271 trtype: tcp 00:08:25.271 adrfam: ipv4 00:08:25.271 subtype: nvme subsystem 00:08:25.271 treq: not required 00:08:25.271 portid: 0 00:08:25.271 trsvcid: 4420 00:08:25.271 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:25.271 traddr: 10.0.0.2 00:08:25.271 eflags: none 00:08:25.271 sectype: none 00:08:25.271 =====Discovery Log Entry 4====== 00:08:25.271 trtype: tcp 00:08:25.271 adrfam: ipv4 00:08:25.271 subtype: nvme subsystem 00:08:25.271 treq: not required 
00:08:25.271 portid: 0 00:08:25.271 trsvcid: 4420 00:08:25.271 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:25.271 traddr: 10.0.0.2 00:08:25.271 eflags: none 00:08:25.271 sectype: none 00:08:25.271 =====Discovery Log Entry 5====== 00:08:25.271 trtype: tcp 00:08:25.271 adrfam: ipv4 00:08:25.271 subtype: discovery subsystem referral 00:08:25.271 treq: not required 00:08:25.271 portid: 0 00:08:25.271 trsvcid: 4430 00:08:25.271 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:25.271 traddr: 10.0.0.2 00:08:25.271 eflags: none 00:08:25.271 sectype: none 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:25.271 Perform nvmf subsystem discovery via RPC 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.271 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 [ 00:08:25.272 { 00:08:25.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:25.272 "subtype": "Discovery", 00:08:25.272 "listen_addresses": [ 00:08:25.272 { 00:08:25.272 "trtype": "TCP", 00:08:25.272 "adrfam": "IPv4", 00:08:25.272 "traddr": "10.0.0.2", 00:08:25.272 "trsvcid": "4420" 00:08:25.272 } 00:08:25.272 ], 00:08:25.272 "allow_any_host": true, 00:08:25.272 "hosts": [] 00:08:25.272 }, 00:08:25.272 { 00:08:25.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.272 "subtype": "NVMe", 00:08:25.272 "listen_addresses": [ 00:08:25.272 { 00:08:25.272 "trtype": "TCP", 00:08:25.272 "adrfam": "IPv4", 00:08:25.272 "traddr": "10.0.0.2", 00:08:25.272 "trsvcid": "4420" 00:08:25.272 } 00:08:25.272 ], 00:08:25.272 "allow_any_host": true, 00:08:25.272 "hosts": [], 00:08:25.272 "serial_number": "SPDK00000000000001", 00:08:25.272 "model_number": "SPDK bdev Controller", 00:08:25.272 "max_namespaces": 32, 00:08:25.272 "min_cntlid": 1, 00:08:25.272 "max_cntlid": 65519, 00:08:25.272 "namespaces": [ 00:08:25.272 { 00:08:25.272 "nsid": 1, 00:08:25.272 "bdev_name": "Null1", 00:08:25.272 "name": "Null1", 00:08:25.272 "nguid": "95AECF1733CD482A82005A7EF97B7E64", 00:08:25.272 "uuid": "95aecf17-33cd-482a-8200-5a7ef97b7e64" 00:08:25.272 } 00:08:25.272 ] 00:08:25.272 }, 00:08:25.272 { 00:08:25.272 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:25.272 "subtype": "NVMe", 00:08:25.272 "listen_addresses": [ 00:08:25.272 { 00:08:25.272 "trtype": "TCP", 00:08:25.272 "adrfam": "IPv4", 00:08:25.272 "traddr": "10.0.0.2", 00:08:25.272 "trsvcid": "4420" 00:08:25.272 } 00:08:25.272 ], 00:08:25.272 "allow_any_host": true, 00:08:25.272 "hosts": [], 00:08:25.272 "serial_number": "SPDK00000000000002", 00:08:25.272 "model_number": "SPDK bdev Controller", 00:08:25.272 "max_namespaces": 32, 00:08:25.272 "min_cntlid": 1, 00:08:25.272 "max_cntlid": 65519, 00:08:25.272 "namespaces": [ 00:08:25.272 { 00:08:25.272 "nsid": 1, 00:08:25.272 "bdev_name": "Null2", 00:08:25.272 "name": "Null2", 00:08:25.272 "nguid": "59B85B589A3B4C82876F65EAB479F074", 00:08:25.272 "uuid": "59b85b58-9a3b-4c82-876f-65eab479f074" 00:08:25.272 } 00:08:25.272 ] 00:08:25.272 }, 00:08:25.272 { 00:08:25.272 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:25.272 "subtype": "NVMe", 00:08:25.272 "listen_addresses": [ 00:08:25.272 { 00:08:25.272 "trtype": "TCP", 00:08:25.272 "adrfam": "IPv4", 00:08:25.272 "traddr": "10.0.0.2", 00:08:25.272 "trsvcid": "4420" 00:08:25.272 } 00:08:25.272 ], 00:08:25.272 "allow_any_host": true, 
00:08:25.272 "hosts": [], 00:08:25.272 "serial_number": "SPDK00000000000003", 00:08:25.272 "model_number": "SPDK bdev Controller", 00:08:25.272 "max_namespaces": 32, 00:08:25.272 "min_cntlid": 1, 00:08:25.272 "max_cntlid": 65519, 00:08:25.272 "namespaces": [ 00:08:25.272 { 00:08:25.272 "nsid": 1, 00:08:25.272 "bdev_name": "Null3", 00:08:25.272 "name": "Null3", 00:08:25.272 "nguid": "07CF396B47D242E2B9A2807A1F507B70", 00:08:25.272 "uuid": "07cf396b-47d2-42e2-b9a2-807a1f507b70" 00:08:25.272 } 00:08:25.272 ] 00:08:25.272 }, 00:08:25.272 { 00:08:25.272 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:25.272 "subtype": "NVMe", 00:08:25.272 "listen_addresses": [ 00:08:25.272 { 00:08:25.272 "trtype": "TCP", 00:08:25.272 "adrfam": "IPv4", 00:08:25.272 "traddr": "10.0.0.2", 00:08:25.272 "trsvcid": "4420" 00:08:25.272 } 00:08:25.272 ], 00:08:25.272 "allow_any_host": true, 00:08:25.272 "hosts": [], 00:08:25.272 "serial_number": "SPDK00000000000004", 00:08:25.272 "model_number": "SPDK bdev Controller", 00:08:25.272 "max_namespaces": 32, 00:08:25.272 "min_cntlid": 1, 00:08:25.272 "max_cntlid": 65519, 00:08:25.272 "namespaces": [ 00:08:25.272 { 00:08:25.272 "nsid": 1, 00:08:25.272 "bdev_name": "Null4", 00:08:25.272 "name": "Null4", 00:08:25.272 "nguid": "5A0E8033556F443CA885379DD0072684", 00:08:25.272 "uuid": "5a0e8033-556f-443c-a885-379dd0072684" 00:08:25.272 } 00:08:25.272 ] 00:08:25.272 } 00:08:25.272 ] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.272 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.272 rmmod nvme_tcp 00:08:25.272 rmmod nvme_fabrics 00:08:25.530 rmmod nvme_keyring 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3944541 ']' 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3944541 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3944541 ']' 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3944541 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3944541 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3944541' 00:08:25.530 killing process with pid 3944541 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3944541 00:08:25.530 20:14:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3944541 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.788 20:14:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.692 20:14:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.692 00:08:27.692 real 0m5.253s 00:08:27.692 user 0m4.072s 00:08:27.692 sys 0m1.748s 00:08:27.692 20:14:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.693 20:14:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.693 ************************************ 00:08:27.693 END TEST nvmf_target_discovery 00:08:27.693 ************************************ 00:08:27.693 20:14:06 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:27.693 20:14:06 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:27.693 20:14:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.693 20:14:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.693 20:14:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.693 ************************************ 00:08:27.693 START TEST nvmf_referrals 00:08:27.693 ************************************ 00:08:27.693 20:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:27.951 * Looking for test storage... 00:08:27.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.951 20:14:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
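The three referral addresses and port just defined (127.0.0.2 through 127.0.0.4 on 4430) drive the referral registrations that show up further down in the trace. As a hedged sketch of that setup step, again using scripts/rpc.py directly (the rpc.py path and default RPC socket are assumptions; the test itself goes through rpc_cmd):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Create the TCP transport and a discovery listener, then register one referral per address.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# referrals.sh then expects nvmf_discovery_get_referrals to report exactly three entries.
"$RPC" nvmf_discovery_get_referrals | jq length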
00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.952 20:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.854 20:14:08 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:29.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:29.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.854 20:14:08 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:29.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:29.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.854 20:14:08 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:08:29.854 00:08:29.854 --- 10.0.0.2 ping statistics --- 00:08:29.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.854 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:29.854 00:08:29.854 --- 10.0.0.1 ping statistics --- 00:08:29.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.854 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:29.854 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.855 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3946639 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3946639 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3946639 ']' 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
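The nvmf_tcp_init trace above builds the two-sided test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed into plain commands, the same steps look roughly like this (interface names are the ones this rig detected; other machines will differ):

# Target NIC goes into its own namespace; initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side, then check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1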
00:08:30.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.155 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.155 [2024-07-15 20:14:08.447607] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:08:30.155 [2024-07-15 20:14:08.447707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.155 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.155 [2024-07-15 20:14:08.517175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.155 [2024-07-15 20:14:08.611308] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.155 [2024-07-15 20:14:08.611369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.155 [2024-07-15 20:14:08.611395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.155 [2024-07-15 20:14:08.611410] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.155 [2024-07-15 20:14:08.611422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.155 [2024-07-15 20:14:08.611506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.155 [2024-07-15 20:14:08.611562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.155 [2024-07-15 20:14:08.611615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.155 [2024-07-15 20:14:08.611618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.412 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.413 [2024-07-15 20:14:08.773906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.413 [2024-07-15 20:14:08.786195] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.413 20:14:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.670 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:30.670 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:30.670 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.671 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.927 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:31.183 20:14:09 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.183 20:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.184 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:31.440 20:14:09 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.440 20:14:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:31.697 
20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:31.697 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.955 rmmod nvme_tcp 00:08:31.955 rmmod nvme_fabrics 00:08:31.955 rmmod nvme_keyring 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3946639 ']' 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3946639 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3946639 ']' 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3946639 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3946639 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3946639' 00:08:31.955 killing process with pid 3946639 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3946639 00:08:31.955 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3946639 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.214 20:14:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.121 20:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.121 00:08:34.121 real 0m6.378s 00:08:34.121 user 0m8.962s 00:08:34.121 sys 0m2.135s 00:08:34.121 20:14:12 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.121 20:14:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.121 ************************************ 00:08:34.121 END TEST nvmf_referrals 00:08:34.121 ************************************ 00:08:34.121 20:14:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:34.121 20:14:12 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:34.121 20:14:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.121 20:14:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.121 20:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.121 ************************************ 00:08:34.121 START TEST nvmf_connect_disconnect 00:08:34.121 ************************************ 00:08:34.121 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:34.380 * Looking for test storage... 00:08:34.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.380 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.380 20:14:12 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.381 20:14:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:36.284 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:36.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:36.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.285 20:14:14 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:36.285 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:36.285 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.285 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:36.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:08:36.544 00:08:36.544 --- 10.0.0.2 ping statistics --- 00:08:36.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.544 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:36.544 00:08:36.544 --- 10.0.0.1 ping statistics --- 00:08:36.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.544 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3948924 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3948924 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3948924 ']' 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.544 20:14:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.544 [2024-07-15 20:14:14.903783] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:08:36.544 [2024-07-15 20:14:14.903886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.544 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.544 [2024-07-15 20:14:14.968154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.544 [2024-07-15 20:14:15.057467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.544 [2024-07-15 20:14:15.057520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.544 [2024-07-15 20:14:15.057534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.544 [2024-07-15 20:14:15.057545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.544 [2024-07-15 20:14:15.057555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.544 [2024-07-15 20:14:15.057606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.544 [2024-07-15 20:14:15.057645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.544 [2024-07-15 20:14:15.057703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.544 [2024-07-15 20:14:15.057705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 [2024-07-15 20:14:15.211760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.803 20:14:15 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 [2024-07-15 20:14:15.269262] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:36.803 20:14:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:39.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.442 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:25.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.720 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:27.720 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:27.720 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.720 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.721 rmmod nvme_tcp 00:12:27.721 rmmod nvme_fabrics 00:12:27.721 rmmod nvme_keyring 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3948924 ']' 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3948924 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
3948924 ']' 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3948924 00:12:27.721 20:18:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:27.721 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.721 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3948924 00:12:27.721 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:27.721 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:27.721 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3948924' 00:12:27.721 killing process with pid 3948924 00:12:27.721 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3948924 00:12:27.721 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3948924 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.980 20:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.881 20:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.881 00:12:29.881 real 3m55.696s 00:12:29.881 user 14m58.094s 00:12:29.881 sys 0m34.182s 00:12:29.881 20:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.881 20:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.881 ************************************ 00:12:29.881 END TEST nvmf_connect_disconnect 00:12:29.881 ************************************ 00:12:29.881 20:18:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.881 20:18:08 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.881 20:18:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.881 20:18:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.881 20:18:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.881 ************************************ 00:12:29.881 START TEST nvmf_multitarget 00:12:29.881 ************************************ 00:12:29.881 20:18:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.881 * Looking for test storage... 
00:12:30.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.139 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.140 20:18:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:32.037 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:32.037 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:32.037 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:32.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.037 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:32.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:12:32.038 00:12:32.038 --- 10.0.0.2 ping statistics --- 00:12:32.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.038 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:32.038 00:12:32.038 --- 10.0.0.1 ping statistics --- 00:12:32.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.038 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3980513 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3980513 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3980513 ']' 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.038 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.296 [2024-07-15 20:18:10.589593] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:12:32.296 [2024-07-15 20:18:10.589676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.296 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.296 [2024-07-15 20:18:10.663279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.296 [2024-07-15 20:18:10.759180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.296 [2024-07-15 20:18:10.759239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.296 [2024-07-15 20:18:10.759255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.296 [2024-07-15 20:18:10.759269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.296 [2024-07-15 20:18:10.759289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.296 [2024-07-15 20:18:10.759351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.296 [2024-07-15 20:18:10.759405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.296 [2024-07-15 20:18:10.759519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.296 [2024-07-15 20:18:10.759522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:32.554 20:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:32.811 "nvmf_tgt_1" 00:12:32.811 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:32.811 "nvmf_tgt_2" 00:12:32.811 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.811 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:32.811 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:32.811 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:33.069 true 00:12:33.069 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:33.069 true 00:12:33.069 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.069 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.326 rmmod nvme_tcp 00:12:33.326 rmmod nvme_fabrics 00:12:33.326 rmmod nvme_keyring 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3980513 ']' 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3980513 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3980513 ']' 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3980513 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3980513 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3980513' 00:12:33.326 killing process with pid 3980513 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3980513 00:12:33.326 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3980513 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.584 20:18:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.485 20:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:35.485 00:12:35.485 real 0m5.634s 00:12:35.485 user 0m6.169s 00:12:35.485 sys 0m1.888s 00:12:35.485 20:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.485 20:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.485 ************************************ 00:12:35.485 END TEST nvmf_multitarget 00:12:35.485 ************************************ 00:12:35.743 20:18:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:35.743 20:18:14 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:35.743 20:18:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.743 20:18:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.743 20:18:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.743 ************************************ 00:12:35.743 START TEST nvmf_rpc 00:12:35.743 ************************************ 00:12:35.743 20:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:35.743 * Looking for test storage... 
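For orientation, the multitarget exercise that just finished reduces to a short RPC sequence against the running nvmf_tgt. A condensed sketch using the same script path and flags seen in the trace (not the test itself, just the essential calls):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  # Freshly started app: only the default target exists.
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]

  # Add two named targets, passing -s 32 exactly as the test does.
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]

  # Delete them again and confirm only the default target remains.
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]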
00:12:35.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.744 20:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
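Every nvme connect later in this run passes the --hostnqn/--hostid pair that nvmf/common.sh set up when it was sourced above: it asks nvme-cli for a host NQN and keeps the embedded UUID as the host ID. A rough equivalent (the parameter expansion is an assumption about how common.sh derives the ID; it is consistent with the values logged here):

  # nvme gen-hostnqn prints nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  # Keep the trailing UUID as the host ID (assumed derivation, matches the trace).
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")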
00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:37.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:37.660 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:37.660 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:37.660 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.660 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.955 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.955 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.955 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.955 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.955 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.955 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.955 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:12:37.955 00:12:37.955 --- 10.0.0.2 ping statistics --- 00:12:37.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.956 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:37.956 00:12:37.956 --- 10.0.0.1 ping statistics --- 00:12:37.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.956 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3982614 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3982614 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3982614 ']' 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.956 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.956 [2024-07-15 20:18:16.334545] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:12:37.956 [2024-07-15 20:18:16.334631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.956 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.956 [2024-07-15 20:18:16.413883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.213 [2024-07-15 20:18:16.509913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.213 [2024-07-15 20:18:16.509969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
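The interface plumbing traced above is what lets a single machine act as both initiator and target: one port of the E810 pair is moved into its own network namespace and nvmf_tgt is launched inside it (hence the ip netns exec prefix on its command line), while the initiator keeps the peer port in the root namespace. Condensed from the trace, with the cvl_0_0/cvl_0_1 names specific to this rig:

  # Target side: isolate one port in a namespace with the target address.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Initiator side: the peer port stays in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Check reachability in both directions before the target starts listening.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1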
00:12:38.213 [2024-07-15 20:18:16.509994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.213 [2024-07-15 20:18:16.510016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.213 [2024-07-15 20:18:16.510028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.213 [2024-07-15 20:18:16.510094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.213 [2024-07-15 20:18:16.510120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.213 [2024-07-15 20:18:16.510189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.213 [2024-07-15 20:18:16.510192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:38.213 "tick_rate": 2700000000, 00:12:38.213 "poll_groups": [ 00:12:38.213 { 00:12:38.213 "name": "nvmf_tgt_poll_group_000", 00:12:38.213 "admin_qpairs": 0, 00:12:38.213 "io_qpairs": 0, 00:12:38.213 "current_admin_qpairs": 0, 00:12:38.213 "current_io_qpairs": 0, 00:12:38.213 "pending_bdev_io": 0, 00:12:38.213 "completed_nvme_io": 0, 00:12:38.213 "transports": [] 00:12:38.213 }, 00:12:38.213 { 00:12:38.213 "name": "nvmf_tgt_poll_group_001", 00:12:38.213 "admin_qpairs": 0, 00:12:38.213 "io_qpairs": 0, 00:12:38.213 "current_admin_qpairs": 0, 00:12:38.213 "current_io_qpairs": 0, 00:12:38.213 "pending_bdev_io": 0, 00:12:38.213 "completed_nvme_io": 0, 00:12:38.213 "transports": [] 00:12:38.213 }, 00:12:38.213 { 00:12:38.213 "name": "nvmf_tgt_poll_group_002", 00:12:38.213 "admin_qpairs": 0, 00:12:38.213 "io_qpairs": 0, 00:12:38.213 "current_admin_qpairs": 0, 00:12:38.213 "current_io_qpairs": 0, 00:12:38.213 "pending_bdev_io": 0, 00:12:38.213 "completed_nvme_io": 0, 00:12:38.213 "transports": [] 00:12:38.213 }, 00:12:38.213 { 00:12:38.213 "name": "nvmf_tgt_poll_group_003", 00:12:38.213 "admin_qpairs": 0, 00:12:38.213 "io_qpairs": 0, 00:12:38.213 "current_admin_qpairs": 0, 00:12:38.213 "current_io_qpairs": 0, 00:12:38.213 "pending_bdev_io": 0, 00:12:38.213 "completed_nvme_io": 0, 00:12:38.213 "transports": [] 00:12:38.213 } 00:12:38.213 ] 00:12:38.213 }' 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:38.213 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 [2024-07-15 20:18:16.764218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:38.471 "tick_rate": 2700000000, 00:12:38.471 "poll_groups": [ 00:12:38.471 { 00:12:38.471 "name": "nvmf_tgt_poll_group_000", 00:12:38.471 "admin_qpairs": 0, 00:12:38.471 "io_qpairs": 0, 00:12:38.471 "current_admin_qpairs": 0, 00:12:38.471 "current_io_qpairs": 0, 00:12:38.471 "pending_bdev_io": 0, 00:12:38.471 "completed_nvme_io": 0, 00:12:38.471 "transports": [ 00:12:38.471 { 00:12:38.471 "trtype": "TCP" 00:12:38.471 } 00:12:38.471 ] 00:12:38.471 }, 00:12:38.471 { 00:12:38.471 "name": "nvmf_tgt_poll_group_001", 00:12:38.471 "admin_qpairs": 0, 00:12:38.471 "io_qpairs": 0, 00:12:38.471 "current_admin_qpairs": 0, 00:12:38.471 "current_io_qpairs": 0, 00:12:38.471 "pending_bdev_io": 0, 00:12:38.471 "completed_nvme_io": 0, 00:12:38.471 "transports": [ 00:12:38.471 { 00:12:38.471 "trtype": "TCP" 00:12:38.471 } 00:12:38.471 ] 00:12:38.471 }, 00:12:38.471 { 00:12:38.471 "name": "nvmf_tgt_poll_group_002", 00:12:38.471 "admin_qpairs": 0, 00:12:38.471 "io_qpairs": 0, 00:12:38.471 "current_admin_qpairs": 0, 00:12:38.471 "current_io_qpairs": 0, 00:12:38.471 "pending_bdev_io": 0, 00:12:38.471 "completed_nvme_io": 0, 00:12:38.471 "transports": [ 00:12:38.471 { 00:12:38.471 "trtype": "TCP" 00:12:38.471 } 00:12:38.471 ] 00:12:38.471 }, 00:12:38.471 { 00:12:38.471 "name": "nvmf_tgt_poll_group_003", 00:12:38.471 "admin_qpairs": 0, 00:12:38.471 "io_qpairs": 0, 00:12:38.471 "current_admin_qpairs": 0, 00:12:38.471 "current_io_qpairs": 0, 00:12:38.471 "pending_bdev_io": 0, 00:12:38.471 "completed_nvme_io": 0, 00:12:38.471 "transports": [ 00:12:38.471 { 00:12:38.471 "trtype": "TCP" 00:12:38.471 } 00:12:38.471 ] 00:12:38.471 } 00:12:38.471 ] 00:12:38.471 }' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
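The jcount/jsum helpers being traced here are just jq plus wc/awk over nvmf_get_stats: count the poll groups, then sum the per-group qpair counters and compare with what a freshly started target should report (one poll group per reactor core of the 0xF mask, zero qpairs before any host connects). The same checks, condensed; rpc_cmd is the autotest wrapper around scripts/rpc.py:

  stats=$(rpc_cmd nvmf_get_stats)

  # jcount '.poll_groups[].name' -> 4 poll groups for the 0xF core mask
  jq '.poll_groups[].name' <<<"$stats" | wc -l

  # jsum '.poll_groups[].admin_qpairs' and '.io_qpairs' -> both sum to 0 at this point
  jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}'
  jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}'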
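After an initial pass that exercises per-host access control (nvmf_subsystem_add_host / nvmf_subsystem_remove_host, including the expected "does not allow host" connect failures), rpc.sh settles into the same create/connect/teardown cycle, repeated $loops=5 times in the trace that follows. One iteration, condensed; the "${NVME_HOST[@]}" flags are the hostnqn/hostid pair noted earlier:

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: expect 1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1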
00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 Malloc1 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 [2024-07-15 20:18:16.915476] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:38.471 [2024-07-15 20:18:16.938021] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:38.471 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:38.471 could not add new controller: failed to write to nvme-fabrics device 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.471 20:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.411 20:18:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.411 20:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:39.411 20:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.411 20:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:39.411 20:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.304 20:18:19 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.304 [2024-07-15 20:18:19.757722] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:41.304 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.304 could not add new controller: failed to write to nvme-fabrics device 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.304 20:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.237 20:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.237 20:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:42.237 20:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.237 20:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:42.237 20:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.135 20:18:22 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.135 [2024-07-15 20:18:22.587575] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.135 20:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.067 20:18:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.067 20:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.067 20:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.068 20:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:45.068 20:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:46.965 20:18:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 [2024-07-15 20:18:25.422084] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.966 20:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.900 20:18:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.900 20:18:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:47.900 20:18:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.900 20:18:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:47.900 20:18:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.798 [2024-07-15 20:18:28.222435] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.798 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.364 20:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.364 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.364 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.364 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:50.364 20:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.890 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.890 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.890 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.890 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.891 [2024-07-15 20:18:30.950412] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.891 20:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.149 20:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.149 20:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.149 20:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.149 20:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.149 20:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.710 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.711 
20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.711 [2024-07-15 20:18:33.765398] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.711 20:18:33 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.711 20:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.969 20:18:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.969 20:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.969 20:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.969 20:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.969 20:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.872 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.872 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.872 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.872 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.872 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.872 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.872 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 [2024-07-15 20:18:36.533667] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.128 [2024-07-15 20:18:36.581731] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.128 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 [2024-07-15 20:18:36.629909] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.129 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 [2024-07-15 20:18:36.678081] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
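What the trace above keeps repeating is a short round-trip: build a subsystem over RPC, attach from the initiator, verify the serial shows up in lsblk, then tear everything down again. A minimal sketch of one iteration, using the rpc.py path, NQNs and target address taken from the trace; wait_for_serial here is an illustrative stand-in for the harness's waitforserial helper, not the real function:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: subsystem, TCP listener, namespace, open host access.
    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"

    # Initiator side: connect, wait for the block device, disconnect.
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    wait_for_serial() {    # poll lsblk until the expected serial appears
      local serial=$1 i=0
      while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
      done
      return 1
    }
    wait_for_serial SPDKISFASTANDAWESOME

    nvme disconnect -n "$nqn"
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 5
    "$rpc" nvmf_delete_subsystem "$nqn"

The shorter loop running here exercises the same RPCs without the initiator attach: add the namespace, remove it by NSID, delete the subsystem.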
00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 [2024-07-15 20:18:36.726255] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.385 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:58.386 "tick_rate": 2700000000, 00:12:58.386 "poll_groups": [ 00:12:58.386 { 00:12:58.386 "name": "nvmf_tgt_poll_group_000", 00:12:58.386 "admin_qpairs": 2, 00:12:58.386 "io_qpairs": 84, 00:12:58.386 "current_admin_qpairs": 0, 00:12:58.386 "current_io_qpairs": 0, 00:12:58.386 "pending_bdev_io": 0, 00:12:58.386 "completed_nvme_io": 183, 00:12:58.386 "transports": [ 00:12:58.386 { 00:12:58.386 "trtype": "TCP" 00:12:58.386 } 00:12:58.386 ] 00:12:58.386 }, 00:12:58.386 { 00:12:58.386 "name": "nvmf_tgt_poll_group_001", 00:12:58.386 "admin_qpairs": 2, 00:12:58.386 "io_qpairs": 84, 00:12:58.386 "current_admin_qpairs": 0, 00:12:58.386 "current_io_qpairs": 0, 00:12:58.386 "pending_bdev_io": 0, 00:12:58.386 "completed_nvme_io": 184, 00:12:58.386 "transports": [ 00:12:58.386 { 00:12:58.386 "trtype": "TCP" 00:12:58.386 } 00:12:58.386 ] 00:12:58.386 }, 00:12:58.386 { 00:12:58.386 
"name": "nvmf_tgt_poll_group_002", 00:12:58.386 "admin_qpairs": 1, 00:12:58.386 "io_qpairs": 84, 00:12:58.386 "current_admin_qpairs": 0, 00:12:58.386 "current_io_qpairs": 0, 00:12:58.386 "pending_bdev_io": 0, 00:12:58.386 "completed_nvme_io": 133, 00:12:58.386 "transports": [ 00:12:58.386 { 00:12:58.386 "trtype": "TCP" 00:12:58.386 } 00:12:58.386 ] 00:12:58.386 }, 00:12:58.386 { 00:12:58.386 "name": "nvmf_tgt_poll_group_003", 00:12:58.386 "admin_qpairs": 2, 00:12:58.386 "io_qpairs": 84, 00:12:58.386 "current_admin_qpairs": 0, 00:12:58.386 "current_io_qpairs": 0, 00:12:58.386 "pending_bdev_io": 0, 00:12:58.386 "completed_nvme_io": 186, 00:12:58.386 "transports": [ 00:12:58.386 { 00:12:58.386 "trtype": "TCP" 00:12:58.386 } 00:12:58.386 ] 00:12:58.386 } 00:12:58.386 ] 00:12:58.386 }' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.386 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.386 rmmod nvme_tcp 00:12:58.386 rmmod nvme_fabrics 00:12:58.386 rmmod nvme_keyring 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3982614 ']' 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3982614 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3982614 ']' 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3982614 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3982614 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3982614' 00:12:58.642 killing process with pid 3982614 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3982614 00:12:58.642 20:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3982614 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.900 20:18:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.847 20:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.847 00:13:00.847 real 0m25.184s 00:13:00.847 user 1m21.929s 00:13:00.847 sys 0m4.094s 00:13:00.847 20:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.847 20:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.847 ************************************ 00:13:00.847 END TEST nvmf_rpc 00:13:00.847 ************************************ 00:13:00.847 20:18:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:00.847 20:18:39 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.847 20:18:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:00.847 20:18:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.847 20:18:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.847 ************************************ 00:13:00.847 START TEST nvmf_invalid 00:13:00.847 ************************************ 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.847 * Looking for test storage... 
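For reference, the qpair totals checked at the end of the nvmf_rpc run above come from the jsum helper: it runs the nvmf_get_stats JSON through a jq filter and lets awk sum the resulting column. A minimal sketch of the same aggregation, assuming jq is installed and re-querying the target instead of reusing a captured stats variable as the harness does:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Sum one numeric field across every poll group reported by nvmf_get_stats.
    jsum() {
      "$rpc" nvmf_get_stats | jq "$1" | awk '{s += $1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 7 in the run above
    jsum '.poll_groups[].io_qpairs'      # 336 in the run above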
00:13:00.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.847 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.848 20:18:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:03.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:03.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:03.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:03.379 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:03.379 00:13:03.379 --- 10.0.0.2 ping statistics --- 00:13:03.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.379 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:13:03.379 00:13:03.379 --- 10.0.0.1 ping statistics --- 00:13:03.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.379 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3987175 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3987175 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3987175 ']' 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.379 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.379 [2024-07-15 20:18:41.559239] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
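The interface plumbing traced above is what gives the test its two endpoints: one port of the NIC is moved into the cvl_0_0_ns_spdk namespace and addressed as the 10.0.0.2 target, the other stays in the root namespace as the 10.0.0.1 initiator, and a single iptables rule opens port 4420 between them. A minimal sketch of those steps, with the interface and namespace names taken from the trace and everything run as root:

    # Target side lives in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link.
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in, then sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1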
00:13:03.379 [2024-07-15 20:18:41.559308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.380 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.380 [2024-07-15 20:18:41.625066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.380 [2024-07-15 20:18:41.718068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.380 [2024-07-15 20:18:41.718131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.380 [2024-07-15 20:18:41.718155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.380 [2024-07-15 20:18:41.718169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.380 [2024-07-15 20:18:41.718181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.380 [2024-07-15 20:18:41.718283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.380 [2024-07-15 20:18:41.718347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.380 [2024-07-15 20:18:41.718398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.380 [2024-07-15 20:18:41.718401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.380 20:18:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2181 00:13:03.637 [2024-07-15 20:18:42.158673] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:03.895 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:03.895 { 00:13:03.895 "nqn": "nqn.2016-06.io.spdk:cnode2181", 00:13:03.895 "tgt_name": "foobar", 00:13:03.895 "method": "nvmf_create_subsystem", 00:13:03.895 "req_id": 1 00:13:03.895 } 00:13:03.895 Got JSON-RPC error response 00:13:03.895 response: 00:13:03.895 { 00:13:03.895 "code": -32603, 00:13:03.895 "message": "Unable to find target foobar" 00:13:03.895 }' 00:13:03.895 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:03.895 { 00:13:03.895 "nqn": "nqn.2016-06.io.spdk:cnode2181", 00:13:03.895 "tgt_name": "foobar", 00:13:03.895 "method": "nvmf_create_subsystem", 00:13:03.895 "req_id": 1 00:13:03.895 } 00:13:03.895 Got JSON-RPC error response 00:13:03.895 response: 00:13:03.895 { 00:13:03.895 "code": -32603, 00:13:03.895 "message": "Unable to find target foobar" 00:13:03.895 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:03.895 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:03.895 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19098 00:13:03.895 [2024-07-15 20:18:42.415543] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19098: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:04.153 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:04.153 { 00:13:04.153 "nqn": "nqn.2016-06.io.spdk:cnode19098", 00:13:04.153 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:04.153 "method": "nvmf_create_subsystem", 00:13:04.153 "req_id": 1 00:13:04.153 } 00:13:04.153 Got JSON-RPC error response 00:13:04.153 response: 00:13:04.153 { 00:13:04.153 "code": -32602, 00:13:04.153 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:04.153 }' 00:13:04.153 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:04.153 { 00:13:04.153 "nqn": "nqn.2016-06.io.spdk:cnode19098", 00:13:04.153 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:04.153 "method": "nvmf_create_subsystem", 00:13:04.153 "req_id": 1 00:13:04.153 } 00:13:04.153 Got JSON-RPC error response 00:13:04.153 response: 00:13:04.153 { 00:13:04.153 "code": -32602, 00:13:04.153 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:04.153 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.153 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:04.153 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27596 00:13:04.411 [2024-07-15 20:18:42.684491] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27596: invalid model number 'SPDK_Controller' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:04.411 { 00:13:04.411 "nqn": "nqn.2016-06.io.spdk:cnode27596", 00:13:04.411 "model_number": "SPDK_Controller\u001f", 00:13:04.411 "method": "nvmf_create_subsystem", 00:13:04.411 "req_id": 1 00:13:04.411 } 00:13:04.411 Got JSON-RPC error response 00:13:04.411 response: 00:13:04.411 { 00:13:04.411 "code": -32602, 00:13:04.411 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.411 }' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:04.411 { 00:13:04.411 "nqn": "nqn.2016-06.io.spdk:cnode27596", 00:13:04.411 "model_number": "SPDK_Controller\u001f", 00:13:04.411 "method": "nvmf_create_subsystem", 00:13:04.411 "req_id": 1 00:13:04.411 } 00:13:04.411 Got JSON-RPC error response 00:13:04.411 response: 00:13:04.411 { 00:13:04.411 "code": -32602, 00:13:04.411 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.411 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' 
'86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:04.411 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bcxV?.}_fDmb6C{5X'\''Q8^' 00:13:04.412 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bcxV?.}_fDmb6C{5X'\''Q8^' nqn.2016-06.io.spdk:cnode13621 00:13:04.671 
[2024-07-15 20:18:42.977439] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13621: invalid serial number 'bcxV?.}_fDmb6C{5X'Q8^' 00:13:04.671 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:04.671 { 00:13:04.671 "nqn": "nqn.2016-06.io.spdk:cnode13621", 00:13:04.671 "serial_number": "bcxV?.}_fDmb6C{5X'\''Q8^", 00:13:04.671 "method": "nvmf_create_subsystem", 00:13:04.671 "req_id": 1 00:13:04.671 } 00:13:04.671 Got JSON-RPC error response 00:13:04.671 response: 00:13:04.671 { 00:13:04.671 "code": -32602, 00:13:04.671 "message": "Invalid SN bcxV?.}_fDmb6C{5X'\''Q8^" 00:13:04.671 }' 00:13:04.671 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:04.671 { 00:13:04.671 "nqn": "nqn.2016-06.io.spdk:cnode13621", 00:13:04.671 "serial_number": "bcxV?.}_fDmb6C{5X'Q8^", 00:13:04.671 "method": "nvmf_create_subsystem", 00:13:04.671 "req_id": 1 00:13:04.671 } 00:13:04.671 Got JSON-RPC error response 00:13:04.671 response: 00:13:04.671 { 00:13:04.671 "code": -32602, 00:13:04.671 "message": "Invalid SN bcxV?.}_fDmb6C{5X'Q8^" 00:13:04.671 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.671 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:04.671 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:04.671 20:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 103 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:04.671 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x36' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=W 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]] 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'XQ$@Qw2ylDYgmXOQS1o62l'\''FAd9W]rAM^3*PcBy2>' 00:13:04.672 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'XQ$@Qw2ylDYgmXOQS1o62l'\''FAd9W]rAM^3*PcBy2>' nqn.2016-06.io.spdk:cnode25972 00:13:04.929 [2024-07-15 20:18:43.378755] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25972: invalid model number 'XQ$@Qw2ylDYgmXOQS1o62l'FAd9W]rAM^3*PcBy2>' 00:13:04.929 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:04.929 { 00:13:04.929 "nqn": "nqn.2016-06.io.spdk:cnode25972", 00:13:04.929 "model_number": "XQ$@Qw2ylDYgmXOQS1o62l'\''FAd9W]rAM^3*PcBy2>", 00:13:04.929 "method": "nvmf_create_subsystem", 00:13:04.929 "req_id": 1 00:13:04.929 } 00:13:04.929 Got JSON-RPC error response 00:13:04.929 response: 00:13:04.929 { 00:13:04.929 "code": -32602, 00:13:04.929 "message": "Invalid MN XQ$@Qw2ylDYgmXOQS1o62l'\''FAd9W]rAM^3*PcBy2>" 00:13:04.929 }' 00:13:04.929 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 
request: 00:13:04.929 { 00:13:04.929 "nqn": "nqn.2016-06.io.spdk:cnode25972", 00:13:04.929 "model_number": "XQ$@Qw2ylDYgmXOQS1o62l'FAd9W]rAM^3*PcBy2>", 00:13:04.929 "method": "nvmf_create_subsystem", 00:13:04.929 "req_id": 1 00:13:04.929 } 00:13:04.929 Got JSON-RPC error response 00:13:04.929 response: 00:13:04.929 { 00:13:04.929 "code": -32602, 00:13:04.929 "message": "Invalid MN XQ$@Qw2ylDYgmXOQS1o62l'FAd9W]rAM^3*PcBy2>" 00:13:04.929 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.929 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:05.186 [2024-07-15 20:18:43.627666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.186 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:05.442 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:05.442 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:05.442 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:05.442 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:05.442 20:18:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:05.698 [2024-07-15 20:18:44.137431] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:05.698 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:05.698 { 00:13:05.698 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.698 "listen_address": { 00:13:05.698 "trtype": "tcp", 00:13:05.698 "traddr": "", 00:13:05.698 "trsvcid": "4421" 00:13:05.698 }, 00:13:05.698 "method": "nvmf_subsystem_remove_listener", 00:13:05.698 "req_id": 1 00:13:05.698 } 00:13:05.698 Got JSON-RPC error response 00:13:05.698 response: 00:13:05.698 { 00:13:05.698 "code": -32602, 00:13:05.698 "message": "Invalid parameters" 00:13:05.698 }' 00:13:05.698 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:05.698 { 00:13:05.698 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.698 "listen_address": { 00:13:05.698 "trtype": "tcp", 00:13:05.698 "traddr": "", 00:13:05.698 "trsvcid": "4421" 00:13:05.698 }, 00:13:05.698 "method": "nvmf_subsystem_remove_listener", 00:13:05.698 "req_id": 1 00:13:05.698 } 00:13:05.698 Got JSON-RPC error response 00:13:05.698 response: 00:13:05.698 { 00:13:05.698 "code": -32602, 00:13:05.698 "message": "Invalid parameters" 00:13:05.698 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:05.698 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6843 -i 0 00:13:05.955 [2024-07-15 20:18:44.378250] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6843: invalid cntlid range [0-65519] 00:13:05.955 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:05.955 { 00:13:05.955 "nqn": "nqn.2016-06.io.spdk:cnode6843", 00:13:05.955 "min_cntlid": 0, 00:13:05.955 "method": "nvmf_create_subsystem", 00:13:05.955 "req_id": 1 00:13:05.955 } 00:13:05.955 Got JSON-RPC error response 00:13:05.955 response: 00:13:05.955 { 00:13:05.955 "code": -32602, 00:13:05.955 "message": 
"Invalid cntlid range [0-65519]" 00:13:05.955 }' 00:13:05.955 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:05.955 { 00:13:05.955 "nqn": "nqn.2016-06.io.spdk:cnode6843", 00:13:05.955 "min_cntlid": 0, 00:13:05.955 "method": "nvmf_create_subsystem", 00:13:05.955 "req_id": 1 00:13:05.955 } 00:13:05.955 Got JSON-RPC error response 00:13:05.955 response: 00:13:05.955 { 00:13:05.955 "code": -32602, 00:13:05.955 "message": "Invalid cntlid range [0-65519]" 00:13:05.955 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.955 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26318 -i 65520 00:13:06.211 [2024-07-15 20:18:44.635084] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26318: invalid cntlid range [65520-65519] 00:13:06.211 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:06.212 { 00:13:06.212 "nqn": "nqn.2016-06.io.spdk:cnode26318", 00:13:06.212 "min_cntlid": 65520, 00:13:06.212 "method": "nvmf_create_subsystem", 00:13:06.212 "req_id": 1 00:13:06.212 } 00:13:06.212 Got JSON-RPC error response 00:13:06.212 response: 00:13:06.212 { 00:13:06.212 "code": -32602, 00:13:06.212 "message": "Invalid cntlid range [65520-65519]" 00:13:06.212 }' 00:13:06.212 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:06.212 { 00:13:06.212 "nqn": "nqn.2016-06.io.spdk:cnode26318", 00:13:06.212 "min_cntlid": 65520, 00:13:06.212 "method": "nvmf_create_subsystem", 00:13:06.212 "req_id": 1 00:13:06.212 } 00:13:06.212 Got JSON-RPC error response 00:13:06.212 response: 00:13:06.212 { 00:13:06.212 "code": -32602, 00:13:06.212 "message": "Invalid cntlid range [65520-65519]" 00:13:06.212 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.212 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3043 -I 0 00:13:06.469 [2024-07-15 20:18:44.900005] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3043: invalid cntlid range [1-0] 00:13:06.469 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:06.469 { 00:13:06.469 "nqn": "nqn.2016-06.io.spdk:cnode3043", 00:13:06.469 "max_cntlid": 0, 00:13:06.469 "method": "nvmf_create_subsystem", 00:13:06.469 "req_id": 1 00:13:06.469 } 00:13:06.469 Got JSON-RPC error response 00:13:06.469 response: 00:13:06.469 { 00:13:06.469 "code": -32602, 00:13:06.469 "message": "Invalid cntlid range [1-0]" 00:13:06.469 }' 00:13:06.469 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:06.469 { 00:13:06.469 "nqn": "nqn.2016-06.io.spdk:cnode3043", 00:13:06.469 "max_cntlid": 0, 00:13:06.469 "method": "nvmf_create_subsystem", 00:13:06.469 "req_id": 1 00:13:06.469 } 00:13:06.469 Got JSON-RPC error response 00:13:06.469 response: 00:13:06.469 { 00:13:06.469 "code": -32602, 00:13:06.469 "message": "Invalid cntlid range [1-0]" 00:13:06.469 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.469 20:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10855 -I 65520 00:13:06.725 [2024-07-15 20:18:45.148806] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10855: invalid cntlid range 
[1-65520] 00:13:06.725 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:06.725 { 00:13:06.725 "nqn": "nqn.2016-06.io.spdk:cnode10855", 00:13:06.725 "max_cntlid": 65520, 00:13:06.725 "method": "nvmf_create_subsystem", 00:13:06.725 "req_id": 1 00:13:06.725 } 00:13:06.725 Got JSON-RPC error response 00:13:06.725 response: 00:13:06.725 { 00:13:06.725 "code": -32602, 00:13:06.725 "message": "Invalid cntlid range [1-65520]" 00:13:06.725 }' 00:13:06.725 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:06.725 { 00:13:06.725 "nqn": "nqn.2016-06.io.spdk:cnode10855", 00:13:06.725 "max_cntlid": 65520, 00:13:06.725 "method": "nvmf_create_subsystem", 00:13:06.725 "req_id": 1 00:13:06.725 } 00:13:06.725 Got JSON-RPC error response 00:13:06.725 response: 00:13:06.725 { 00:13:06.725 "code": -32602, 00:13:06.725 "message": "Invalid cntlid range [1-65520]" 00:13:06.725 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.725 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16745 -i 6 -I 5 00:13:06.983 [2024-07-15 20:18:45.393603] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16745: invalid cntlid range [6-5] 00:13:06.983 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:06.983 { 00:13:06.983 "nqn": "nqn.2016-06.io.spdk:cnode16745", 00:13:06.983 "min_cntlid": 6, 00:13:06.983 "max_cntlid": 5, 00:13:06.983 "method": "nvmf_create_subsystem", 00:13:06.983 "req_id": 1 00:13:06.983 } 00:13:06.983 Got JSON-RPC error response 00:13:06.983 response: 00:13:06.983 { 00:13:06.983 "code": -32602, 00:13:06.983 "message": "Invalid cntlid range [6-5]" 00:13:06.983 }' 00:13:06.983 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:06.983 { 00:13:06.983 "nqn": "nqn.2016-06.io.spdk:cnode16745", 00:13:06.983 "min_cntlid": 6, 00:13:06.983 "max_cntlid": 5, 00:13:06.983 "method": "nvmf_create_subsystem", 00:13:06.983 "req_id": 1 00:13:06.983 } 00:13:06.983 Got JSON-RPC error response 00:13:06.983 response: 00:13:06.983 { 00:13:06.983 "code": -32602, 00:13:06.983 "message": "Invalid cntlid range [6-5]" 00:13:06.983 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.983 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:07.241 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:07.241 { 00:13:07.241 "name": "foobar", 00:13:07.241 "method": "nvmf_delete_target", 00:13:07.241 "req_id": 1 00:13:07.241 } 00:13:07.241 Got JSON-RPC error response 00:13:07.241 response: 00:13:07.241 { 00:13:07.241 "code": -32602, 00:13:07.241 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:07.241 }' 00:13:07.241 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:07.241 { 00:13:07.241 "name": "foobar", 00:13:07.241 "method": "nvmf_delete_target", 00:13:07.241 "req_id": 1 00:13:07.241 } 00:13:07.241 Got JSON-RPC error response 00:13:07.241 response: 00:13:07.242 { 00:13:07.242 "code": -32602, 00:13:07.242 "message": "The specified target doesn't exist, cannot delete it." 
00:13:07.242 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.242 rmmod nvme_tcp 00:13:07.242 rmmod nvme_fabrics 00:13:07.242 rmmod nvme_keyring 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3987175 ']' 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3987175 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3987175 ']' 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3987175 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3987175 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3987175' 00:13:07.242 killing process with pid 3987175 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3987175 00:13:07.242 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3987175 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.501 20:18:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.447 20:18:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.447 00:13:09.447 real 0m8.591s 00:13:09.447 user 0m20.034s 00:13:09.447 sys 0m2.385s 00:13:09.447 20:18:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.447 20:18:47 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.447 ************************************ 00:13:09.447 END TEST nvmf_invalid 00:13:09.447 ************************************ 00:13:09.447 20:18:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:09.447 20:18:47 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:09.447 20:18:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:09.447 20:18:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.447 20:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:09.447 ************************************ 00:13:09.447 START TEST nvmf_abort 00:13:09.447 ************************************ 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:09.447 * Looking for test storage... 00:13:09.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.447 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.706 20:18:47 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.706 20:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.641 
20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:11.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:11.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.641 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:11.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:11.642 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:11.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:11.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:13:11.642 00:13:11.642 --- 10.0.0.2 ping statistics --- 00:13:11.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.642 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:13:11.642 00:13:11.642 --- 10.0.0.1 ping statistics --- 00:13:11.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.642 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3989723 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3989723 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3989723 ']' 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.642 20:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.642 [2024-07-15 20:18:50.045289] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
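A condensed recap of the nvmf_tcp_init sequence traced above, kept here because the same bring-up repeats for every TCP test in this log. The interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addressing are specific to this host; the commands are taken from the trace, lightly reordered for readability, not from the script source:

    # drop any stale addressing, then move the first E810 port into a private namespace (target side)
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps cvl_0_1 at 10.0.0.1; the namespaced target port gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept inbound NVMe/TCP (port 4420) on the initiator interface, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1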
00:13:11.642 [2024-07-15 20:18:50.045379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.642 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.642 [2024-07-15 20:18:50.113406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.919 [2024-07-15 20:18:50.208588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.919 [2024-07-15 20:18:50.208650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.919 [2024-07-15 20:18:50.208662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.919 [2024-07-15 20:18:50.208673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.919 [2024-07-15 20:18:50.208682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.919 [2024-07-15 20:18:50.208765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.919 [2024-07-15 20:18:50.208832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.919 [2024-07-15 20:18:50.208835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.919 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.919 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:11.919 20:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.919 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.919 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 [2024-07-15 20:18:50.345227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 Malloc0 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 Delay0 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 [2024-07-15 20:18:50.414498] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.920 20:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:12.178 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.178 [2024-07-15 20:18:50.521261] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:14.706 Initializing NVMe Controllers 00:13:14.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:14.706 controller IO queue size 128 less than required 00:13:14.706 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:14.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:14.706 Initialization complete. Launching workers. 
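A condensed view of what target/abort.sh has configured up to this point, as traced above. All commands appear in the trace (paths shortened to be repo-relative); rpc_cmd is the autotest helper that forwards to scripts/rpc.py:

    # TCP transport plus a deliberately slow namespace: Delay0 wraps Malloc0 with large artificial latencies
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # subsystem cnode0 exposes Delay0 on 10.0.0.2:4420, with a discovery listener alongside
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # queue depth 128 against the slow namespace keeps I/O queued, which the abort example then cancels
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128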
00:13:14.706 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33364 00:13:14.706 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33425, failed to submit 62 00:13:14.706 success 33368, unsuccess 57, failed 0 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.706 rmmod nvme_tcp 00:13:14.706 rmmod nvme_fabrics 00:13:14.706 rmmod nvme_keyring 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3989723 ']' 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3989723 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3989723 ']' 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3989723 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3989723 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3989723' 00:13:14.706 killing process with pid 3989723 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3989723 00:13:14.706 20:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3989723 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.706 20:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.608 20:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.608 00:13:16.608 real 0m7.162s 00:13:16.608 user 0m10.644s 00:13:16.608 sys 0m2.506s 00:13:16.608 20:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.608 20:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:16.608 ************************************ 00:13:16.608 END TEST nvmf_abort 00:13:16.608 ************************************ 00:13:16.608 20:18:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:16.608 20:18:55 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.608 20:18:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:16.608 20:18:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.608 20:18:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:16.608 ************************************ 00:13:16.608 START TEST nvmf_ns_hotplug_stress 00:13:16.608 ************************************ 00:13:16.608 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.866 * Looking for test storage... 00:13:16.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.866 20:18:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.866 20:18:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.866 20:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:18.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:18.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.765 20:18:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:18.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:18.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.765 20:18:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:18.765 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:13:19.023 00:13:19.023 --- 10.0.0.2 ping statistics --- 00:13:19.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.023 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:13:19.023 00:13:19.023 --- 10.0.0.1 ping statistics --- 00:13:19.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.023 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3992062 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3992062 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3992062 ']' 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.023 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.023 [2024-07-15 20:18:57.416948] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
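The target start for this test follows the same nvmfappstart pattern seen in the abort test: nvmf_tgt is launched inside the target-side namespace and the script blocks until the RPC socket answers. Roughly, paraphrased from the trace (waitforlisten is the helper from common/autotest_common.sh and, per the trace, defaults to polling /var/tmp/spdk.sock):

    # run the target in the namespace with core mask 0xE and the full tracepoint group mask
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # wait for the app to listen on the default RPC socket before issuing any rpc.py configuration
    waitforlisten "$nvmfpid"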
00:13:19.023 [2024-07-15 20:18:57.417033] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.023 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.023 [2024-07-15 20:18:57.480684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.281 [2024-07-15 20:18:57.570100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.281 [2024-07-15 20:18:57.570168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.281 [2024-07-15 20:18:57.570182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.281 [2024-07-15 20:18:57.570192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.281 [2024-07-15 20:18:57.570202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.281 [2024-07-15 20:18:57.570294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.281 [2024-07-15 20:18:57.570357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.281 [2024-07-15 20:18:57.570359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:19.281 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:19.538 [2024-07-15 20:18:57.935301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.538 20:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:19.795 20:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.052 [2024-07-15 20:18:58.454321] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.052 20:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:20.309 20:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:20.566 Malloc0 00:13:20.566 20:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:20.823 Delay0 00:13:20.823 20:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.081 20:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:21.338 NULL1 00:13:21.338 20:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:21.596 20:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3992365 00:13:21.596 20:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:21.596 20:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.596 20:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:21.596 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.853 20:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.110 20:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:22.110 20:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:22.368 true 00:13:22.368 20:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:22.368 20:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.626 20:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.883 20:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:22.883 20:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:23.141 true 00:13:23.141 20:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:23.141 20:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.399 20:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.656 20:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:23.656 20:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:23.914 true 00:13:23.914 20:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:23.914 20:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.846 Read completed with error (sct=0, sc=11) 00:13:24.846 20:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.103 20:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:25.103 20:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:25.360 true 00:13:25.360 20:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:25.360 20:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.617 20:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.875 20:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:25.875 20:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:26.132 true 00:13:26.132 20:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:26.132 20:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.138 20:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.138 20:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:27.138 20:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:27.413 true 00:13:27.413 20:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:27.413 20:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.673 20:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.930 20:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:27.930 20:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:28.186 true 00:13:28.186 20:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:28.186 20:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.116 20:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.373 20:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:29.373 20:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:29.631 true 00:13:29.631 20:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:29.631 20:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.888 20:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.144 20:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:30.144 20:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:30.400 true 00:13:30.400 20:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:30.400 20:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.330 20:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.330 20:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:31.330 20:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:31.587 true 00:13:31.587 20:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:31.587 20:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:13:31.844 20:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.102 20:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:32.102 20:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:32.360 true 00:13:32.360 20:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:32.360 20:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.298 20:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.557 20:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:33.557 20:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:33.815 true 00:13:33.815 20:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:33.815 20:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.072 20:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.329 20:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:34.329 20:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:34.587 true 00:13:34.587 20:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:34.587 20:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.524 20:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.781 20:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:35.781 20:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:36.039 true 00:13:36.039 20:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:36.039 20:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:13:36.298 20:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.557 20:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:36.557 20:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:36.815 true 00:13:36.815 20:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:36.815 20:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.073 20:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.330 20:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:37.330 20:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:37.330 true 00:13:37.587 20:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:37.587 20:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.534 20:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.791 20:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:38.791 20:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:39.047 true 00:13:39.047 20:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:39.047 20:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.304 20:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.562 20:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:39.562 20:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:39.819 true 00:13:39.819 20:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:39.819 20:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:40.779 20:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.035 20:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:41.035 20:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:41.292 true 00:13:41.292 20:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:41.292 20:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.550 20:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.807 20:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:41.807 20:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:42.064 true 00:13:42.064 20:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:42.064 20:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.997 20:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.997 20:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:42.997 20:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:43.254 true 00:13:43.254 20:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:43.254 20:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.512 20:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.769 20:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:43.769 20:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:44.027 true 00:13:44.027 20:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:44.027 20:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.958 20:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.214 20:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:45.214 20:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:45.470 true 00:13:45.470 20:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:45.470 20:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.748 20:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.005 20:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:46.005 20:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:46.261 true 00:13:46.261 20:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:46.261 20:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.191 20:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.447 20:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:47.447 20:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:47.707 true 00:13:47.707 20:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:47.707 20:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.965 20:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.223 20:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:48.223 20:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:48.480 true 00:13:48.480 20:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:48.480 20:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.417 20:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.417 20:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:49.417 20:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:49.675 true 00:13:49.675 20:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:49.675 20:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.932 20:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.188 20:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:50.188 20:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:50.445 true 00:13:50.445 20:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:50.445 20:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.377 20:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.636 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:51.636 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:51.894 Initializing NVMe Controllers 00:13:51.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.894 Controller IO queue size 128, less than required. 00:13:51.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:51.894 Controller IO queue size 128, less than required. 00:13:51.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:51.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:51.894 Initialization complete. Launching workers. 
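The hotplug loop traced above repeats one fixed pattern: check that the background I/O job is still alive, hot-remove namespace 1 from cnode1, re-attach the Delay0 bdev as a namespace, then bump NULL1 to the next size. The sketch below is reconstructed from the sh@44-@50 xtrace lines, not quoted from the script; the rpc/nqn/perf_pid variable names are shorthand, and perf_pid stands for the backgrounded I/O process (3992365 in this run) whose exit ends the loop.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    # keep churning namespaces for as long as the I/O job is running (sh@44)
    while kill -0 "$perf_pid"; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1         # sh@45: detach namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0       # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                     # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"       # sh@50: resize NULL1 while it is in use
    done

The I/O statistics printed next summarize what that background workload observed while the namespaces were being added and removed underneath it.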
00:13:51.894 ========================================================
00:13:51.894 Latency(us)
00:13:51.894 Device Information : IOPS MiB/s Average min max
00:13:51.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 663.75 0.32 93560.07 2730.61 1091565.70
00:13:51.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10147.28 4.95 12577.02 2977.28 450041.76
00:13:51.894 ========================================================
00:13:51.894 Total : 10811.03 5.28 17549.03 2730.61 1091565.70
00:13:51.894
00:13:51.894 true 00:13:52.154 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3992365 00:13:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3992365) - No such process 00:13:52.154 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3992365 00:13:52.154 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.154 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.719 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:52.719 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:52.719 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:52.719 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:52.719 20:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:52.719 null0 00:13:52.719 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:52.719 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:52.719 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:52.977 null1 00:13:52.977 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:52.977 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:52.977 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:53.235 null2 00:13:53.235 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.235 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.235 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:53.494 null3 00:13:53.494 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.494 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.494 20:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:53.796 null4 00:13:53.796 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.796 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.796 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:54.077 null5 00:13:54.077 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.077 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.077 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:54.334 null6 00:13:54.334 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.334 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.334 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:54.592 null7 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
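At this point the log has moved on to the second half of the test: eight 100 MB null bdevs with 4096-byte blocks (null0 through null7) have been created, and eight add_remove workers are being started in the background, one per namespace ID. A rough reconstruction of the helper and its launch loops, pieced together from the sh@14-@18 and sh@58-@66 xtrace lines (variable names abbreviated; only what the trace shows is assumed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # sh@14-@18: one worker repeatedly attaches and detaches a single namespace
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096        # sh@60
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                 # sh@63: namespace i+1 backed by null<i>
        pids+=($!)                                       # sh@64
    done
    wait "${pids[@]}"                                    # sh@66: wait for all eight workers

Because each worker runs as its own background job, the xtrace lines from the eight loops interleave freely in the log that follows.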
00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
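When reading an interleaved stretch like the one that follows, it can help to know what is actually attached to cnode1 at a given moment. The trace itself never queries this, but SPDK's nvmf_get_subsystems RPC reports each subsystem together with its currently attached namespaces, so a by-hand reproduction of this run could poll it between iterations:

    # not part of the test; useful when replaying the add/remove churn manually
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems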
00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3996290 3996291 3996292 3996295 3996297 3996299 3996301 3996303 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.593 20:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.852 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.109 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.110 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.367 20:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.626 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.884 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.143 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.402 20:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.660 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.660 
20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.918 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.175 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.176 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.176 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.433 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.433 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.433 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.433 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.433 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.690 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.690 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.690 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.690 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.690 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.690 20:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.948 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.207 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.207 
20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.207 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.207 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.207 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.207 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.207 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.207 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.464 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.464 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.464 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.464 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.464 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.464 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.464 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.465 20:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.722 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.722 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.723 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.723 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.723 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.723 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.723 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.723 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.981 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.239 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.239 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.239 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.239 
20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.239 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.239 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.239 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.239 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.497 20:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.754 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.011 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.011 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
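The xtrace above is ns_hotplug_stress.sh cycling through its script lines @16-@18 over and over. Condensed to a single pass, and reconstructed purely from the RPC calls visible in the log rather than from the test script itself, each cycle attaches bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 and then detaches them again:

    # Sketch of one observed add/remove cycle (not the actual ns_hotplug_stress.sh source).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Attach null0..null7 as namespaces 1..8 of the subsystem.
    for n in $(seq 1 8); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done

    # Detach the same namespaces before the next pass.
    for n in $(seq 1 8); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done

The namespace numbers complete out of numerical order in the trace, which suggests the real script issues the adds concurrently; the sketch runs them sequentially for readability.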
00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.012 rmmod nvme_tcp 00:14:00.012 rmmod nvme_fabrics 00:14:00.012 rmmod nvme_keyring 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3992062 ']' 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3992062 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3992062 ']' 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3992062 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.012 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3992062 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3992062' 00:14:00.272 killing process with pid 3992062 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3992062 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3992062 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.272 20:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.805 20:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.805 00:14:02.805 real 0m45.702s 00:14:02.805 user 3m28.848s 00:14:02.805 sys 0m16.206s 00:14:02.805 20:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.805 20:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.805 ************************************ 00:14:02.805 END TEST nvmf_ns_hotplug_stress 00:14:02.805 ************************************ 00:14:02.805 20:19:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:02.805 20:19:40 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.805 20:19:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:02.805 20:19:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.805 20:19:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.805 ************************************ 00:14:02.805 START TEST nvmf_connect_stress 00:14:02.805 ************************************ 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.805 * Looking for test storage... 
00:14:02.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.805 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.806 20:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:04.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:04.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:04.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.707 20:19:42 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:04.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:14:04.707 00:14:04.707 --- 10.0.0.2 ping statistics --- 00:14:04.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.707 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:14:04.707 00:14:04.707 --- 10.0.0.1 ping statistics --- 00:14:04.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.707 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.707 20:19:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3999054 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3999054 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3999054 ']' 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.707 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.708 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.708 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.708 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.708 [2024-07-15 20:19:43.054734] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
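Everything from the "Found net devices" lines down to the EAL banner above is nvmftestinit and nvmfappstart preparing the rig for connect_stress: one port of the e810 pair is moved into a private network namespace to act as the target while the other stays in the root namespace as the initiator. Collapsed to just the commands the trace shows (the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are all taken from the log), the bring-up is roughly:

    # Target interface goes into its own netns so host and target can share one box.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and sanity-check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace with the flags seen in the trace.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &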
00:14:04.708 [2024-07-15 20:19:43.054817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.708 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.708 [2024-07-15 20:19:43.123292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:04.708 [2024-07-15 20:19:43.214692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.708 [2024-07-15 20:19:43.214756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.708 [2024-07-15 20:19:43.214772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.708 [2024-07-15 20:19:43.214786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.708 [2024-07-15 20:19:43.214797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.708 [2024-07-15 20:19:43.214919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.708 [2024-07-15 20:19:43.215019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.708 [2024-07-15 20:19:43.215023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.965 [2024-07-15 20:19:43.360437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.965 [2024-07-15 20:19:43.390055] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.965 NULL1 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3999197 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
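With the target listening, connect_stress.sh configures it over RPC and starts the stressor; the trace shows the exact arguments. rpc_cmd is the test framework's RPC helper, treated here (as an assumption) as equivalent to invoking scripts/rpc.py against the running target with the same arguments:

    # Transport, a subsystem capped at 10 namespaces, a TCP listener on the
    # target-side address, and a null bdev -- arguments exactly as traced.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512

    # Launch the connection stressor against that listener (-t 10 is its run
    # time; it gets PID 3999197 in this run).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

The @23-@28 lines also assemble a batch of RPCs into target/rpc.txt (twenty "cat" appends); the batch contents never appear in the xtrace output, so they are not reproduced here.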
00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.965 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.966 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.529 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.529 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:05.529 20:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.529 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.529 20:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.787 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.787 20:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:05.787 20:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.787 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.787 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.045 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.046 20:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 
00:14:06.046 20:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.046 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.046 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.303 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.303 20:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:06.303 20:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.303 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.303 20:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.560 20:19:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:06.560 20:19:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.560 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.560 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.125 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.125 20:19:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:07.125 20:19:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.125 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.125 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.398 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.398 20:19:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:07.398 20:19:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.398 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.398 20:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.676 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.676 20:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:07.676 20:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.676 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.676 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.932 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.932 20:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:07.932 20:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.932 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.932 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.189 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.189 20:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:08.189 20:19:46 nvmf_tcp.nvmf_connect_stress -- 
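From this point until the stressor exits, the trace is just connect_stress.sh@34-@35 repeating: a kill -0 liveness check on PID 3999197 followed by an rpc_cmd batch. One plausible shape for that loop, assuming (the trace does not confirm it) that the rpc.txt batch built earlier is what gets replayed, is:

    # Keep hammering the target with management RPCs while connect_stress is
    # still alive; kill -0 only tests that the PID exists, it sends no signal.
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"
    done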
target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.189 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.189 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.754 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.754 20:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:08.754 20:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.754 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.754 20:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.011 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.011 20:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:09.011 20:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.011 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.011 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.268 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.268 20:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:09.268 20:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.268 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.268 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.525 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.525 20:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:09.525 20:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.525 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.525 20:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.782 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.782 20:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:09.782 20:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.782 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.782 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.347 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.347 20:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:10.347 20:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.347 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.347 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.605 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.605 20:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:10.605 20:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.605 
20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.605 20:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.863 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.863 20:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:10.863 20:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.863 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.863 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.121 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.121 20:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:11.121 20:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.121 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.121 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.378 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.378 20:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:11.378 20:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.378 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.378 20:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.943 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.943 20:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:11.943 20:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.943 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.943 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.200 20:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:12.200 20:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.200 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.200 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.458 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.458 20:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:12.458 20:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.458 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.458 20:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.715 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.715 20:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:12.715 20:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.715 20:19:51 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.715 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.973 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.973 20:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:12.973 20:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.973 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.973 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.538 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.538 20:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:13.538 20:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.538 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.538 20:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.794 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.794 20:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:13.794 20:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.794 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.794 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.050 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.050 20:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:14.050 20:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.050 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.050 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.307 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.307 20:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:14.307 20:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.307 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.307 20:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.564 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.564 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:14.564 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.564 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.564 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.136 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.136 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:15.136 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.136 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:15.136 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.136 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3999197 00:14:15.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3999197) - No such process 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3999197 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.393 rmmod nvme_tcp 00:14:15.393 rmmod nvme_fabrics 00:14:15.393 rmmod nvme_keyring 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3999054 ']' 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3999054 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3999054 ']' 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3999054 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3999054 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3999054' 00:14:15.393 killing process with pid 3999054 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3999054 00:14:15.393 20:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3999054 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.651 20:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.181 20:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.181 00:14:18.181 real 0m15.216s 00:14:18.181 user 0m38.012s 00:14:18.181 sys 0m6.007s 00:14:18.181 20:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.181 20:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.181 ************************************ 00:14:18.181 END TEST nvmf_connect_stress 00:14:18.181 ************************************ 00:14:18.181 20:19:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:18.181 20:19:56 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:18.181 20:19:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:18.181 20:19:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.181 20:19:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:18.181 ************************************ 00:14:18.181 START TEST nvmf_fused_ordering 00:14:18.181 ************************************ 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:18.181 * Looking for test storage... 
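The fused_ordering pass that starts here boils down to a short sequence that can be read off the commands appearing later in this log: bring up an nvmf target inside the test namespace, create a TCP transport and a subsystem backed by a null bdev, expose a listener on 10.0.0.2:4420, then run the fused_ordering test binary against it. A rough standalone equivalent, assuming that rpc_cmd in this log is the harness's wrapper around SPDK's scripts/rpc.py and that everything is run from the SPDK source tree, would be:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # shm id 0, tracepoint mask 0xFFFF, core mask 0x2
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                        # same transport flags as the test script
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                                # 1000 MB null bdev, 512-byte blocks (the "1GB" namespace below)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines printed further down are the per-iteration progress output of that last binary.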
00:14:18.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.181 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.182 20:19:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:20.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:20.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.088 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:20.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.089 20:19:58 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:20.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:20.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:14:20.089 00:14:20.089 --- 10.0.0.2 ping statistics --- 00:14:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.089 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:14:20.089 00:14:20.089 --- 10.0.0.1 ping statistics --- 00:14:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.089 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4002340 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4002340 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4002340 ']' 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.089 [2024-07-15 20:19:58.320532] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
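Those two pings close out the network bring-up that nvmftestinit performed just above: the first ice port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2/24, the second port (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1/24, and an iptables rule admits NVMe/TCP traffic on port 4420. A minimal sketch of the same topology, using the cvl_* device names from this log, looks like:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let the NVMe/TCP port through
  ping -c 1 10.0.0.2                                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target -> initiator

With that in place, the nvmf_tgt instance being started here inside the namespace listens on 10.0.0.2:4420, while the fused_ordering initiator connects from the root namespace over cvl_0_1.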
00:14:20.089 [2024-07-15 20:19:58.320616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.089 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.089 [2024-07-15 20:19:58.388822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.089 [2024-07-15 20:19:58.477817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.089 [2024-07-15 20:19:58.477890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.089 [2024-07-15 20:19:58.477908] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.089 [2024-07-15 20:19:58.477921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.089 [2024-07-15 20:19:58.477933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.089 [2024-07-15 20:19:58.477971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.089 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.348 20:19:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.348 20:19:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.348 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.348 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.348 [2024-07-15 20:19:58.626008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.349 [2024-07-15 20:19:58.642213] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.349 20:19:58 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.349 NULL1 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.349 20:19:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:20.349 [2024-07-15 20:19:58.687009] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:14:20.349 [2024-07-15 20:19:58.687051] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002367 ] 00:14:20.349 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.913 Attached to nqn.2016-06.io.spdk:cnode1 00:14:20.913 Namespace ID: 1 size: 1GB 00:14:20.913 fused_ordering(0) 00:14:20.913 fused_ordering(1) 00:14:20.913 fused_ordering(2) 00:14:20.913 fused_ordering(3) 00:14:20.913 fused_ordering(4) 00:14:20.913 fused_ordering(5) 00:14:20.913 fused_ordering(6) 00:14:20.913 fused_ordering(7) 00:14:20.913 fused_ordering(8) 00:14:20.913 fused_ordering(9) 00:14:20.913 fused_ordering(10) 00:14:20.913 fused_ordering(11) 00:14:20.913 fused_ordering(12) 00:14:20.913 fused_ordering(13) 00:14:20.913 fused_ordering(14) 00:14:20.913 fused_ordering(15) 00:14:20.913 fused_ordering(16) 00:14:20.913 fused_ordering(17) 00:14:20.913 fused_ordering(18) 00:14:20.914 fused_ordering(19) 00:14:20.914 fused_ordering(20) 00:14:20.914 fused_ordering(21) 00:14:20.914 fused_ordering(22) 00:14:20.914 fused_ordering(23) 00:14:20.914 fused_ordering(24) 00:14:20.914 fused_ordering(25) 00:14:20.914 fused_ordering(26) 00:14:20.914 fused_ordering(27) 00:14:20.914 fused_ordering(28) 00:14:20.914 fused_ordering(29) 00:14:20.914 fused_ordering(30) 00:14:20.914 fused_ordering(31) 00:14:20.914 fused_ordering(32) 00:14:20.914 fused_ordering(33) 00:14:20.914 fused_ordering(34) 00:14:20.914 fused_ordering(35) 00:14:20.914 fused_ordering(36) 00:14:20.914 fused_ordering(37) 00:14:20.914 fused_ordering(38) 00:14:20.914 fused_ordering(39) 00:14:20.914 fused_ordering(40) 00:14:20.914 fused_ordering(41) 00:14:20.914 fused_ordering(42) 00:14:20.914 fused_ordering(43) 00:14:20.914 
fused_ordering(44) 00:14:20.914 fused_ordering(45) 00:14:20.914 fused_ordering(46) 00:14:20.914 fused_ordering(47) 00:14:20.914 fused_ordering(48) 00:14:20.914 fused_ordering(49) 00:14:20.914 fused_ordering(50) 00:14:20.914 fused_ordering(51) 00:14:20.914 fused_ordering(52) 00:14:20.914 fused_ordering(53) 00:14:20.914 fused_ordering(54) 00:14:20.914 fused_ordering(55) 00:14:20.914 fused_ordering(56) 00:14:20.914 fused_ordering(57) 00:14:20.914 fused_ordering(58) 00:14:20.914 fused_ordering(59) 00:14:20.914 fused_ordering(60) 00:14:20.914 fused_ordering(61) 00:14:20.914 fused_ordering(62) 00:14:20.914 fused_ordering(63) 00:14:20.914 fused_ordering(64) 00:14:20.914 fused_ordering(65) 00:14:20.914 fused_ordering(66) 00:14:20.914 fused_ordering(67) 00:14:20.914 fused_ordering(68) 00:14:20.914 fused_ordering(69) 00:14:20.914 fused_ordering(70) 00:14:20.914 fused_ordering(71) 00:14:20.914 fused_ordering(72) 00:14:20.914 fused_ordering(73) 00:14:20.914 fused_ordering(74) 00:14:20.914 fused_ordering(75) 00:14:20.914 fused_ordering(76) 00:14:20.914 fused_ordering(77) 00:14:20.914 fused_ordering(78) 00:14:20.914 fused_ordering(79) 00:14:20.914 fused_ordering(80) 00:14:20.914 fused_ordering(81) 00:14:20.914 fused_ordering(82) 00:14:20.914 fused_ordering(83) 00:14:20.914 fused_ordering(84) 00:14:20.914 fused_ordering(85) 00:14:20.914 fused_ordering(86) 00:14:20.914 fused_ordering(87) 00:14:20.914 fused_ordering(88) 00:14:20.914 fused_ordering(89) 00:14:20.914 fused_ordering(90) 00:14:20.914 fused_ordering(91) 00:14:20.914 fused_ordering(92) 00:14:20.914 fused_ordering(93) 00:14:20.914 fused_ordering(94) 00:14:20.914 fused_ordering(95) 00:14:20.914 fused_ordering(96) 00:14:20.914 fused_ordering(97) 00:14:20.914 fused_ordering(98) 00:14:20.914 fused_ordering(99) 00:14:20.914 fused_ordering(100) 00:14:20.914 fused_ordering(101) 00:14:20.914 fused_ordering(102) 00:14:20.914 fused_ordering(103) 00:14:20.914 fused_ordering(104) 00:14:20.914 fused_ordering(105) 00:14:20.914 fused_ordering(106) 00:14:20.914 fused_ordering(107) 00:14:20.914 fused_ordering(108) 00:14:20.914 fused_ordering(109) 00:14:20.914 fused_ordering(110) 00:14:20.914 fused_ordering(111) 00:14:20.914 fused_ordering(112) 00:14:20.914 fused_ordering(113) 00:14:20.914 fused_ordering(114) 00:14:20.914 fused_ordering(115) 00:14:20.914 fused_ordering(116) 00:14:20.914 fused_ordering(117) 00:14:20.914 fused_ordering(118) 00:14:20.914 fused_ordering(119) 00:14:20.914 fused_ordering(120) 00:14:20.914 fused_ordering(121) 00:14:20.914 fused_ordering(122) 00:14:20.914 fused_ordering(123) 00:14:20.914 fused_ordering(124) 00:14:20.914 fused_ordering(125) 00:14:20.914 fused_ordering(126) 00:14:20.914 fused_ordering(127) 00:14:20.914 fused_ordering(128) 00:14:20.914 fused_ordering(129) 00:14:20.914 fused_ordering(130) 00:14:20.914 fused_ordering(131) 00:14:20.914 fused_ordering(132) 00:14:20.914 fused_ordering(133) 00:14:20.914 fused_ordering(134) 00:14:20.914 fused_ordering(135) 00:14:20.914 fused_ordering(136) 00:14:20.914 fused_ordering(137) 00:14:20.914 fused_ordering(138) 00:14:20.914 fused_ordering(139) 00:14:20.914 fused_ordering(140) 00:14:20.914 fused_ordering(141) 00:14:20.914 fused_ordering(142) 00:14:20.914 fused_ordering(143) 00:14:20.914 fused_ordering(144) 00:14:20.914 fused_ordering(145) 00:14:20.914 fused_ordering(146) 00:14:20.914 fused_ordering(147) 00:14:20.914 fused_ordering(148) 00:14:20.914 fused_ordering(149) 00:14:20.914 fused_ordering(150) 00:14:20.914 fused_ordering(151) 00:14:20.914 fused_ordering(152) 00:14:20.914 
fused_ordering(153) 00:14:20.914 fused_ordering(154) 00:14:20.914 fused_ordering(155) 00:14:20.914 fused_ordering(156) 00:14:20.914 fused_ordering(157) 00:14:20.914 fused_ordering(158) 00:14:20.914 fused_ordering(159) 00:14:20.914 fused_ordering(160) 00:14:20.914 fused_ordering(161) 00:14:20.914 fused_ordering(162) 00:14:20.914 fused_ordering(163) 00:14:20.914 fused_ordering(164) 00:14:20.914 fused_ordering(165) 00:14:20.914 fused_ordering(166) 00:14:20.914 fused_ordering(167) 00:14:20.914 fused_ordering(168) 00:14:20.914 fused_ordering(169) 00:14:20.914 fused_ordering(170) 00:14:20.914 fused_ordering(171) 00:14:20.914 fused_ordering(172) 00:14:20.914 fused_ordering(173) 00:14:20.914 fused_ordering(174) 00:14:20.914 fused_ordering(175) 00:14:20.914 fused_ordering(176) 00:14:20.914 fused_ordering(177) 00:14:20.914 fused_ordering(178) 00:14:20.914 fused_ordering(179) 00:14:20.914 fused_ordering(180) 00:14:20.914 fused_ordering(181) 00:14:20.914 fused_ordering(182) 00:14:20.914 fused_ordering(183) 00:14:20.914 fused_ordering(184) 00:14:20.914 fused_ordering(185) 00:14:20.914 fused_ordering(186) 00:14:20.914 fused_ordering(187) 00:14:20.914 fused_ordering(188) 00:14:20.914 fused_ordering(189) 00:14:20.914 fused_ordering(190) 00:14:20.914 fused_ordering(191) 00:14:20.914 fused_ordering(192) 00:14:20.914 fused_ordering(193) 00:14:20.914 fused_ordering(194) 00:14:20.914 fused_ordering(195) 00:14:20.914 fused_ordering(196) 00:14:20.914 fused_ordering(197) 00:14:20.914 fused_ordering(198) 00:14:20.914 fused_ordering(199) 00:14:20.914 fused_ordering(200) 00:14:20.914 fused_ordering(201) 00:14:20.914 fused_ordering(202) 00:14:20.914 fused_ordering(203) 00:14:20.914 fused_ordering(204) 00:14:20.914 fused_ordering(205) 00:14:21.481 fused_ordering(206) 00:14:21.481 fused_ordering(207) 00:14:21.481 fused_ordering(208) 00:14:21.481 fused_ordering(209) 00:14:21.481 fused_ordering(210) 00:14:21.481 fused_ordering(211) 00:14:21.481 fused_ordering(212) 00:14:21.481 fused_ordering(213) 00:14:21.481 fused_ordering(214) 00:14:21.481 fused_ordering(215) 00:14:21.481 fused_ordering(216) 00:14:21.481 fused_ordering(217) 00:14:21.481 fused_ordering(218) 00:14:21.481 fused_ordering(219) 00:14:21.481 fused_ordering(220) 00:14:21.481 fused_ordering(221) 00:14:21.481 fused_ordering(222) 00:14:21.481 fused_ordering(223) 00:14:21.481 fused_ordering(224) 00:14:21.481 fused_ordering(225) 00:14:21.481 fused_ordering(226) 00:14:21.481 fused_ordering(227) 00:14:21.481 fused_ordering(228) 00:14:21.481 fused_ordering(229) 00:14:21.481 fused_ordering(230) 00:14:21.481 fused_ordering(231) 00:14:21.481 fused_ordering(232) 00:14:21.481 fused_ordering(233) 00:14:21.481 fused_ordering(234) 00:14:21.481 fused_ordering(235) 00:14:21.481 fused_ordering(236) 00:14:21.481 fused_ordering(237) 00:14:21.481 fused_ordering(238) 00:14:21.481 fused_ordering(239) 00:14:21.481 fused_ordering(240) 00:14:21.481 fused_ordering(241) 00:14:21.481 fused_ordering(242) 00:14:21.481 fused_ordering(243) 00:14:21.481 fused_ordering(244) 00:14:21.481 fused_ordering(245) 00:14:21.481 fused_ordering(246) 00:14:21.481 fused_ordering(247) 00:14:21.481 fused_ordering(248) 00:14:21.481 fused_ordering(249) 00:14:21.481 fused_ordering(250) 00:14:21.481 fused_ordering(251) 00:14:21.481 fused_ordering(252) 00:14:21.481 fused_ordering(253) 00:14:21.481 fused_ordering(254) 00:14:21.481 fused_ordering(255) 00:14:21.481 fused_ordering(256) 00:14:21.481 fused_ordering(257) 00:14:21.481 fused_ordering(258) 00:14:21.481 fused_ordering(259) 00:14:21.481 fused_ordering(260) 
00:14:21.481 fused_ordering(261) 00:14:21.481 fused_ordering(262) 00:14:21.481 fused_ordering(263) 00:14:21.481 fused_ordering(264) 00:14:21.481 fused_ordering(265) 00:14:21.481 fused_ordering(266) 00:14:21.481 fused_ordering(267) 00:14:21.481 fused_ordering(268) 00:14:21.481 fused_ordering(269) 00:14:21.481 fused_ordering(270) 00:14:21.481 fused_ordering(271) 00:14:21.481 fused_ordering(272) 00:14:21.481 fused_ordering(273) 00:14:21.481 fused_ordering(274) 00:14:21.481 fused_ordering(275) 00:14:21.481 fused_ordering(276) 00:14:21.481 fused_ordering(277) 00:14:21.481 fused_ordering(278) 00:14:21.481 fused_ordering(279) 00:14:21.481 fused_ordering(280) 00:14:21.481 fused_ordering(281) 00:14:21.481 fused_ordering(282) 00:14:21.481 fused_ordering(283) 00:14:21.481 fused_ordering(284) 00:14:21.481 fused_ordering(285) 00:14:21.481 fused_ordering(286) 00:14:21.481 fused_ordering(287) 00:14:21.481 fused_ordering(288) 00:14:21.481 fused_ordering(289) 00:14:21.481 fused_ordering(290) 00:14:21.481 fused_ordering(291) 00:14:21.481 fused_ordering(292) 00:14:21.481 fused_ordering(293) 00:14:21.481 fused_ordering(294) 00:14:21.481 fused_ordering(295) 00:14:21.481 fused_ordering(296) 00:14:21.481 fused_ordering(297) 00:14:21.481 fused_ordering(298) 00:14:21.481 fused_ordering(299) 00:14:21.481 fused_ordering(300) 00:14:21.481 fused_ordering(301) 00:14:21.481 fused_ordering(302) 00:14:21.481 fused_ordering(303) 00:14:21.481 fused_ordering(304) 00:14:21.481 fused_ordering(305) 00:14:21.481 fused_ordering(306) 00:14:21.481 fused_ordering(307) 00:14:21.481 fused_ordering(308) 00:14:21.481 fused_ordering(309) 00:14:21.481 fused_ordering(310) 00:14:21.481 fused_ordering(311) 00:14:21.481 fused_ordering(312) 00:14:21.481 fused_ordering(313) 00:14:21.481 fused_ordering(314) 00:14:21.481 fused_ordering(315) 00:14:21.481 fused_ordering(316) 00:14:21.481 fused_ordering(317) 00:14:21.481 fused_ordering(318) 00:14:21.481 fused_ordering(319) 00:14:21.481 fused_ordering(320) 00:14:21.481 fused_ordering(321) 00:14:21.481 fused_ordering(322) 00:14:21.481 fused_ordering(323) 00:14:21.481 fused_ordering(324) 00:14:21.481 fused_ordering(325) 00:14:21.481 fused_ordering(326) 00:14:21.481 fused_ordering(327) 00:14:21.481 fused_ordering(328) 00:14:21.481 fused_ordering(329) 00:14:21.481 fused_ordering(330) 00:14:21.481 fused_ordering(331) 00:14:21.481 fused_ordering(332) 00:14:21.481 fused_ordering(333) 00:14:21.481 fused_ordering(334) 00:14:21.481 fused_ordering(335) 00:14:21.481 fused_ordering(336) 00:14:21.481 fused_ordering(337) 00:14:21.481 fused_ordering(338) 00:14:21.481 fused_ordering(339) 00:14:21.481 fused_ordering(340) 00:14:21.481 fused_ordering(341) 00:14:21.481 fused_ordering(342) 00:14:21.481 fused_ordering(343) 00:14:21.481 fused_ordering(344) 00:14:21.481 fused_ordering(345) 00:14:21.481 fused_ordering(346) 00:14:21.481 fused_ordering(347) 00:14:21.481 fused_ordering(348) 00:14:21.481 fused_ordering(349) 00:14:21.481 fused_ordering(350) 00:14:21.481 fused_ordering(351) 00:14:21.481 fused_ordering(352) 00:14:21.481 fused_ordering(353) 00:14:21.481 fused_ordering(354) 00:14:21.481 fused_ordering(355) 00:14:21.481 fused_ordering(356) 00:14:21.481 fused_ordering(357) 00:14:21.481 fused_ordering(358) 00:14:21.481 fused_ordering(359) 00:14:21.481 fused_ordering(360) 00:14:21.481 fused_ordering(361) 00:14:21.481 fused_ordering(362) 00:14:21.481 fused_ordering(363) 00:14:21.481 fused_ordering(364) 00:14:21.481 fused_ordering(365) 00:14:21.481 fused_ordering(366) 00:14:21.481 fused_ordering(367) 00:14:21.481 
fused_ordering(368) through fused_ordering(1012) completed (per-iteration output condensed; timestamps advance from 00:14:21.481 to 00:14:23.590)
00:14:23.590 fused_ordering(1013) 00:14:23.590 fused_ordering(1014) 00:14:23.590 fused_ordering(1015) 00:14:23.590 fused_ordering(1016) 00:14:23.590 fused_ordering(1017) 00:14:23.590 fused_ordering(1018) 00:14:23.590 fused_ordering(1019) 00:14:23.590 fused_ordering(1020) 00:14:23.590 fused_ordering(1021) 00:14:23.590 fused_ordering(1022) 00:14:23.590 fused_ordering(1023) 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.590 rmmod nvme_tcp 00:14:23.590 rmmod nvme_fabrics 00:14:23.590 rmmod nvme_keyring 00:14:23.590 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4002340 ']' 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4002340 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4002340 ']' 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4002340 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4002340 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4002340' 00:14:23.850 killing process with pid 4002340 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4002340 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4002340 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.850 20:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.392 20:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.392 00:14:26.392 real 0m8.274s 00:14:26.392 user 0m5.906s 00:14:26.392 sys 0m4.009s 00:14:26.392 20:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.392 20:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:26.392 ************************************ 00:14:26.392 END TEST nvmf_fused_ordering 00:14:26.392 ************************************ 00:14:26.392 20:20:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:26.393 20:20:04 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:26.393 20:20:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.393 20:20:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.393 20:20:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.393 ************************************ 00:14:26.393 START TEST nvmf_delete_subsystem 00:14:26.393 ************************************ 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:26.393 * Looking for test storage... 00:14:26.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.393 20:20:04 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.393 20:20:04 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.393 20:20:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.297 20:20:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:28.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:28.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.297 20:20:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:28.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.297 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:28.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:28.298 20:20:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:28.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:14:28.298 00:14:28.298 --- 10.0.0.2 ping statistics --- 00:14:28.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.298 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:14:28.298 00:14:28.298 --- 10.0.0.1 ping statistics --- 00:14:28.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.298 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4004708 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4004708 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4004708 ']' 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.298 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.298 [2024-07-15 20:20:06.620936] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:14:28.298 [2024-07-15 20:20:06.621007] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.298 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.298 [2024-07-15 20:20:06.687941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.298 [2024-07-15 20:20:06.778661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
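Condensed, the namespace-based test network that nvmf_tcp_init builds in the trace above amounts to the sketch below. The interface names (cvl_0_0, cvl_0_1), the namespace name (cvl_0_0_ns_spdk) and the 10.0.0.1/10.0.0.2 addressing are taken from the log; the variable names and the standalone ordering are illustrative, not the exact nvmf/common.sh code, and assume root privileges:

    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"                        # move the target port into its own namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                 # initiator address stays in the host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                          # host -> target, as in the trace
    ip netns exec "$NS" ping -c 1 10.0.0.1                      # target namespace -> host

Isolating the target port in its own namespace forces initiator/target traffic between the two physical ports onto a real network path instead of being short-circuited through local loopback.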
00:14:28.298 [2024-07-15 20:20:06.778722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.298 [2024-07-15 20:20:06.778738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.298 [2024-07-15 20:20:06.778751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.298 [2024-07-15 20:20:06.778763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.298 [2024-07-15 20:20:06.778844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.298 [2024-07-15 20:20:06.778849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.557 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.557 [2024-07-15 20:20:06.930015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.558 [2024-07-15 20:20:06.946235] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.558 NULL1 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.558 Delay0 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4004745 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:28.558 20:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:28.558 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.558 [2024-07-15 20:20:07.030959] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
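For reference, the delete_subsystem test case being traced here reduces to the RPC sequence sketched below. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py (talking to the default /var/tmp/spdk.sock); the arguments are copied from the log, while the relative paths, the backgrounding and the sleep are illustrative assumptions. The 1,000,000 us delays injected by bdev_delay_create keep the 128-deep queues from spdk_nvme_perf full, so the nvmf_delete_subsystem call at 20:20:08 below races against in-flight commands; the "completed with error (sct=0, sc=8)" lines that follow are those commands being aborted, consistent with NVMe generic status 0x08 (command aborted due to SQ deletion), which is exactly the behavior under test:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512                      # 1000 MB null bdev, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency per I/O
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                         # background load, queue depth 128
    sleep 2                                                               # let I/O pile up against Delay0
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # delete while I/O is still outstanding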
00:14:30.457 20:20:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.457 20:20:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.457 20:20:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Write completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Write completed with error (sct=0, sc=8) 00:14:30.715 Write completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Write completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Write completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Write completed with error (sct=0, sc=8) 00:14:30.715 Write completed with error (sct=0, sc=8) 00:14:30.715 starting I/O failed: -6 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.715 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 [2024-07-15 20:20:09.161822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb03800d2f0 is same with the state(5) to be set 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed 
with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 starting I/O failed: -6 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 [2024-07-15 20:20:09.162394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0970 is same with the state(5) to be set 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 
00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:30.716 [2024-07-15 20:20:09.162822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb038000c00 is same with the state(5) to be set 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read 
completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Read completed with error (sct=0, sc=8) 00:14:30.716 Write completed with error (sct=0, sc=8) 00:14:31.649 [2024-07-15 20:20:10.130117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dea30 is same with the state(5) to be set 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 [2024-07-15 20:20:10.161526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb03800cfe0 is same with the state(5) to be set 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 [2024-07-15 20:20:10.161744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0e30 is same with the state(5) to be set 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed 
with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 [2024-07-15 20:20:10.161988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb03800d600 is same with the state(5) to be set 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.649 Write completed with error (sct=0, sc=8) 00:14:31.649 Read completed with error (sct=0, sc=8) 00:14:31.650 Write completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Write completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 Write completed with error (sct=0, sc=8) 00:14:31.650 Read completed with error (sct=0, sc=8) 00:14:31.650 [2024-07-15 20:20:10.164702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d1450 is same with the state(5) to be set 00:14:31.650 Initializing NVMe Controllers 00:14:31.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.650 Controller IO queue size 128, less than required. 00:14:31.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:31.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:31.650 Initialization complete. Launching workers. 
00:14:31.650 ======================================================== 00:14:31.650 Latency(us) 00:14:31.650 Device Information : IOPS MiB/s Average min max 00:14:31.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.77 0.08 926980.59 529.11 2004239.09 00:14:31.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.89 0.07 962878.41 1001.78 2000880.03 00:14:31.650 ======================================================== 00:14:31.650 Total : 312.66 0.15 944074.79 529.11 2004239.09 00:14:31.650 00:14:31.650 [2024-07-15 20:20:10.165696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dea30 (9): Bad file descriptor 00:14:31.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:31.650 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.650 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:31.650 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4004745 00:14:31.650 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4004745 00:14:32.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4004745) - No such process 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4004745 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4004745 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4004745 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.215 [2024-07-15 20:20:10.690061] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4005259 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:32.215 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:32.216 20:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:32.216 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.474 [2024-07-15 20:20:10.752681] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
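The bursts of "completed with error (sct=0, sc=8)" above are the expected outcome of this test: spdk_nvme_perf is driven against nqn.2016-06.io.spdk:cnode1 while the subsystem is torn down underneath it, and the script then polls the perf PID until the process exits. A rough standalone equivalent of that flow (illustrative sketch only: $rootdir is a placeholder for the SPDK checkout, the sleep is a simplification, and the nvmf_delete_subsystem call is assumed because the deletion step itself falls outside this excerpt):

    # Sketch: run perf against the subsystem, delete it mid-run, wait for perf to die.
    rootdir=/path/to/spdk                      # placeholder
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rootdir/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 1                                    # give perf time to connect and queue I/O

    "$rootdir/scripts/rpc.py" nvmf_delete_subsystem "$nqn"   # queued I/O now completes with errors

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf did not exit" >&2; exit 1; }
        sleep 0.5
    done

The kill -0 / sleep 0.5 polling loop in the sketch mirrors the delete_subsystem.sh lines traced above and below.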
00:14:32.732 20:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:32.732 20:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:32.732 20:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:33.297 20:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:33.297 20:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:33.297 20:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:33.863 20:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:33.863 20:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:33.863 20:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:34.428 20:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:34.428 20:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:34.428 20:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:34.992 20:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:34.992 20:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:34.992 20:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.249 20:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:35.249 20:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:35.249 20:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.507 Initializing NVMe Controllers 00:14:35.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.507 Controller IO queue size 128, less than required. 00:14:35.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:35.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:35.507 Initialization complete. Launching workers. 
00:14:35.507 ======================================================== 00:14:35.507 Latency(us) 00:14:35.507 Device Information : IOPS MiB/s Average min max 00:14:35.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004355.71 1000214.65 1011595.49 00:14:35.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004634.68 1000238.63 1042182.22 00:14:35.507 ======================================================== 00:14:35.507 Total : 256.00 0.12 1004495.20 1000214.65 1042182.22 00:14:35.507 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4005259 00:14:35.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4005259) - No such process 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4005259 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.765 rmmod nvme_tcp 00:14:35.765 rmmod nvme_fabrics 00:14:35.765 rmmod nvme_keyring 00:14:35.765 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4004708 ']' 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4004708 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4004708 ']' 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4004708 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4004708 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4004708' 00:14:36.023 killing process with pid 4004708 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4004708 00:14:36.023 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
4004708 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.282 20:20:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.188 20:20:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.188 00:14:38.188 real 0m12.140s 00:14:38.188 user 0m27.615s 00:14:38.188 sys 0m2.959s 00:14:38.188 20:20:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.188 20:20:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.188 ************************************ 00:14:38.188 END TEST nvmf_delete_subsystem 00:14:38.188 ************************************ 00:14:38.188 20:20:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:38.188 20:20:16 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:38.188 20:20:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.188 20:20:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.188 20:20:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.188 ************************************ 00:14:38.188 START TEST nvmf_ns_masking 00:14:38.188 ************************************ 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:38.188 * Looking for test storage... 
00:14:38.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.188 20:20:16 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d1c16f1b-4e5f-4f1c-85da-565e55ff3140 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3c55ddc9-680c-437c-8b91-8be18d533e07 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6f828f8c-aa74-49fa-ad25-70eef5658888 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.512 20:20:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:40.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:40.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.414 
20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:40.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.414 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:40.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:14:40.415 00:14:40.415 --- 10.0.0.2 ping statistics --- 00:14:40.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.415 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:40.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:14:40.415 00:14:40.415 --- 10.0.0.1 ping statistics --- 00:14:40.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.415 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4007600 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4007600 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4007600 ']' 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.415 20:20:18 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.415 20:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.415 [2024-07-15 20:20:18.939473] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:14:40.415 [2024-07-15 20:20:18.939558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.674 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.674 [2024-07-15 20:20:19.007977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.674 [2024-07-15 20:20:19.096070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.674 [2024-07-15 20:20:19.096136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.674 [2024-07-15 20:20:19.096153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.674 [2024-07-15 20:20:19.096166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.674 [2024-07-15 20:20:19.096178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.674 [2024-07-15 20:20:19.096209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.932 20:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.932 20:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:40.932 20:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.932 20:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.932 20:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.932 20:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.932 20:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:41.190 [2024-07-15 20:20:19.516858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.190 20:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:41.190 20:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:41.190 20:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:41.449 Malloc1 00:14:41.449 20:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:41.707 Malloc2 00:14:41.707 20:20:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:14:41.965 20:20:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:42.223 20:20:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.481 [2024-07-15 20:20:20.819388] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.481 20:20:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:42.481 20:20:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f828f8c-aa74-49fa-ad25-70eef5658888 -a 10.0.0.2 -s 4420 -i 4 00:14:42.739 20:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.739 20:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:42.739 20:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.739 20:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:42.739 20:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:44.637 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:44.638 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:44.638 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:44.638 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.638 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.638 [ 0]:0x1 00:14:44.638 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.638 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.897 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb8579713ebb4e08b430b037802b33df 00:14:44.897 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb8579713ebb4e08b430b037802b33df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.897 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
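The trace so far has set up the masking test end to end: a TCP transport, two 64 MB malloc bdevs, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc1 as NSID 1 and a listener on 10.0.0.2:4420, an initiator connect using host NQN nqn.2016-06.io.spdk:host1, a visibility check on NSID 1 via its NGUID, and (just issued) the addition of Malloc2 as NSID 2. Collapsed into plain commands, with values copied from the log (a sketch, not the test script; the controller name /dev/nvme0 comes from the nvme list-subsys output above and may differ elsewhere):

    # Target side, via the SPDK RPC client used in this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect with an explicit host NQN/UUID, then inspect what is exposed.
    nvme connect -t tcp -n $nqn -q nqn.2016-06.io.spdk:host1 \
        -I 6f828f8c-aa74-49fa-ad25-70eef5658888 -a 10.0.0.2 -s 4420 -i 4
    nvme list-ns /dev/nvme0                                # active NSIDs, e.g. "[ 0]:0x1"
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # non-zero NGUID => namespace visible

The ns_is_visible checks in the trace are exactly this list-ns/id-ns/jq pattern, treating an all-zero NGUID as "not visible to this host".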
00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.157 [ 0]:0x1 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb8579713ebb4e08b430b037802b33df 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb8579713ebb4e08b430b037802b33df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.157 [ 1]:0x2 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd88797fb96e49bb80ad30730491abe0 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd88797fb96e49bb80ad30730491abe0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.157 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.415 20:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:45.673 20:20:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:45.673 20:20:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f828f8c-aa74-49fa-ad25-70eef5658888 -a 10.0.0.2 -s 4420 -i 4 00:14:45.932 20:20:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:45.932 20:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:45.932 20:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.932 20:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:45.932 20:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:45.932 20:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:47.828 20:20:26 
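Re-adding Malloc1 with --no-auto-visible is the crux of the masking test: NSID 1 now reports an all-zero NGUID to the connected host until that host's NQN is explicitly allowed, which the checks that follow confirm before and after each toggle. Reduced to the RPC calls involved (sketch; NQNs and NSID taken from this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    host=nqn.2016-06.io.spdk:host1

    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 1 --no-auto-visible   # NSID 1 starts hidden from all hosts
    $rpc nvmf_ns_add_host $nqn 1 $host        # expose NSID 1 to host1 only
    $rpc nvmf_ns_remove_host $nqn 1 $host     # hide NSID 1 from host1 again

Namespaces added without --no-auto-visible (such as NSID 2 here) remain visible to every connected host regardless of these per-host grants.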
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.828 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.086 [ 0]:0x2 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd88797fb96e49bb80ad30730491abe0 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
fd88797fb96e49bb80ad30730491abe0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.086 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:48.343 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:48.343 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.343 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.343 [ 0]:0x1 00:14:48.343 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.343 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb8579713ebb4e08b430b037802b33df 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb8579713ebb4e08b430b037802b33df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.601 [ 1]:0x2 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd88797fb96e49bb80ad30730491abe0 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd88797fb96e49bb80ad30730491abe0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.601 20:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:48.859 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:48.859 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:48.859 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:48.859 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.860 [ 0]:0x2 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd88797fb96e49bb80ad30730491abe0 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd88797fb96e49bb80ad30730491abe0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.860 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f828f8c-aa74-49fa-ad25-70eef5658888 -a 10.0.0.2 -s 4420 -i 4 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:49.425 20:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
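The connect step above issues a fabrics connect and then polls lsblk until the expected number of block devices carrying the subsystem serial appears. The same flow as a sketch, assuming the listener at 10.0.0.2:4420 and the serial SPDKISFASTANDAWESOME used throughout this run:

    # Connect as host1 and wait for <expected> namespaces to surface (sketch).
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4
    expected=2
    for _ in $(seq 1 15); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( found == expected )) && break
        sleep 2
    done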
00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.321 [ 0]:0x1 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.321 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb8579713ebb4e08b430b037802b33df 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb8579713ebb4e08b430b037802b33df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.579 [ 1]:0x2 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd88797fb96e49bb80ad30730491abe0 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd88797fb96e49bb80ad30730491abe0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.579 20:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.837 [ 0]:0x2 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd88797fb96e49bb80ad30730491abe0 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd88797fb96e49bb80ad30730491abe0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:51.837 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:52.095 [2024-07-15 20:20:30.597085] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:52.096 request: 00:14:52.096 { 00:14:52.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.096 "nsid": 2, 00:14:52.096 "host": "nqn.2016-06.io.spdk:host1", 00:14:52.096 "method": "nvmf_ns_remove_host", 00:14:52.096 "req_id": 1 00:14:52.096 } 00:14:52.096 Got JSON-RPC error response 00:14:52.096 response: 00:14:52.096 { 00:14:52.096 "code": -32602, 00:14:52.096 "message": "Invalid parameters" 00:14:52.096 } 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.096 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:52.354 [ 0]:0x2 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd88797fb96e49bb80ad30730491abe0 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
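The Invalid parameters response above is the outcome the NOT wrapper expects: in this configuration namespace 2 is left visible to all hosts, so there is no per-host entry to remove and the target rejects the request. The masking RPC pair exercised throughout the test, shown as a short illustrative sequence (assuming the SPDK rpc.py script is on PATH):

    # Grant host1 visibility of namespace 1 on cnode1, then revoke it again.
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Removing a host from a namespace that was never restricted fails with
    # "Invalid parameters", which is exactly what the NOT wrapper checks above.
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
        || echo "expected failure"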
fd88797fb96e49bb80ad30730491abe0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4009147 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4009147 /var/tmp/host.sock 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4009147 ']' 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:52.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.354 20:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.612 [2024-07-15 20:20:30.890554] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
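From this point the test drives a second SPDK application as the NVMe-oF host side, reachable over its own RPC socket so its bdev_nvme commands do not mix with the target's. A sketch of that pattern, assuming an SPDK build tree like the one used above:

    # Start a host-side SPDK app on core 1 with a dedicated RPC socket.
    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # Poll until the socket answers (the waitforlisten helper does this in the harness).
    until ./scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done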
00:14:52.612 [2024-07-15 20:20:30.890632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009147 ] 00:14:52.612 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.612 [2024-07-15 20:20:30.954514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.612 [2024-07-15 20:20:31.048074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.870 20:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.870 20:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:52.870 20:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.128 20:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:53.425 20:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d1c16f1b-4e5f-4f1c-85da-565e55ff3140 00:14:53.426 20:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:53.426 20:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D1C16F1B4E5F4F1C85DA565E55FF3140 -i 00:14:53.696 20:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3c55ddc9-680c-437c-8b91-8be18d533e07 00:14:53.696 20:20:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:53.696 20:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3C55DDC9680C437C8B918BE18D533E07 -i 00:14:53.953 20:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:54.209 20:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:54.467 20:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:54.467 20:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:55.030 nvme0n1 00:14:55.030 20:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:55.030 20:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
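The namespaces are recreated here with explicit NGUIDs derived from fixed UUIDs, so the host side can later match bdev UUIDs against them. A short sketch of that derivation and the add_ns call, assuming bdevs Malloc1/Malloc2 already exist on the target and rpc.py is on PATH:

    uuid=d1c16f1b-4e5f-4f1c-85da-565e55ff3140
    # uuid2nguid in essence: drop the dashes and upper-case the hex digits.
    nguid=$(echo "$uuid" | tr -d '-' | tr '[:lower:]' '[:upper:]')
    # Re-add the namespace with that fixed NGUID (flags as captured above).
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    # Grant host1 access to namespace 1.
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1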
nvme1 00:14:55.593 nvme1n2 00:14:55.593 20:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:55.593 20:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:55.593 20:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:55.593 20:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:55.593 20:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:55.850 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:55.850 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:55.850 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:55.850 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:56.106 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d1c16f1b-4e5f-4f1c-85da-565e55ff3140 == \d\1\c\1\6\f\1\b\-\4\e\5\f\-\4\f\1\c\-\8\5\d\a\-\5\6\5\e\5\5\f\f\3\1\4\0 ]] 00:14:56.106 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:56.106 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:56.106 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3c55ddc9-680c-437c-8b91-8be18d533e07 == \3\c\5\5\d\d\c\9\-\6\8\0\c\-\4\3\7\c\-\8\b\9\1\-\8\b\e\1\8\d\5\3\3\e\0\7 ]] 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4009147 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4009147 ']' 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4009147 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4009147 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4009147' 00:14:56.363 killing process with pid 4009147 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4009147 00:14:56.363 20:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4009147 00:14:56.621 20:20:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:57.185 20:20:35 
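Each host NQN ends up seeing exactly one namespace, and the check above confirms it by reading the bdev UUID back through the host-side bdev_nvme stack and comparing it with the UUID that seeded the NGUID. Sketch of that verification, assuming the host-side RPC socket started earlier:

    # Attach as host1; only namespace 1 should surface, as bdev nvme0n1.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # The bdev UUID must round-trip to the UUID used to build the NGUID.
    uuid=$(rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid')
    [[ $uuid == d1c16f1b-4e5f-4f1c-85da-565e55ff3140 ]] && echo "namespace 1 mapped correctly"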
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.185 rmmod nvme_tcp 00:14:57.185 rmmod nvme_fabrics 00:14:57.185 rmmod nvme_keyring 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4007600 ']' 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4007600 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4007600 ']' 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4007600 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4007600 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4007600' 00:14:57.185 killing process with pid 4007600 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4007600 00:14:57.185 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4007600 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.442 20:20:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.346 20:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.346 00:14:59.346 real 0m21.210s 00:14:59.346 user 0m27.885s 00:14:59.346 sys 0m4.135s 00:14:59.346 20:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.346 20:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.346 ************************************ 00:14:59.346 END TEST nvmf_ns_masking 00:14:59.346 ************************************ 00:14:59.621 20:20:37 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:14:59.621 20:20:37 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:59.621 20:20:37 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:59.621 20:20:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.621 20:20:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.621 20:20:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.621 ************************************ 00:14:59.621 START TEST nvmf_nvme_cli 00:14:59.621 ************************************ 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:59.621 * Looking for test storage... 00:14:59.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.621 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.622 20:20:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.524 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.524 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:01.525 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:01.525 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:01.525 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:01.525 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.525 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.783 20:20:40 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:15:01.783 00:15:01.783 --- 10.0.0.2 ping statistics --- 00:15:01.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.783 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:15:01.783 00:15:01.783 --- 10.0.0.1 ping statistics --- 00:15:01.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.783 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:01.783 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4011703 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4011703 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4011703 ']' 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.784 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.784 [2024-07-15 20:20:40.237384] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
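nvmftestinit above moves one port of the NIC pair into a private network namespace for the target (10.0.0.2) and leaves the other in the default namespace for the initiator (10.0.0.1), then verifies reachability both ways before loading nvme-tcp. The same wiring in a few lines, assuming the cvl_0_0/cvl_0_1 interfaces detected above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator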
00:15:01.784 [2024-07-15 20:20:40.237467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.784 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.784 [2024-07-15 20:20:40.306246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.042 [2024-07-15 20:20:40.403865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.042 [2024-07-15 20:20:40.403942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.042 [2024-07-15 20:20:40.403958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.042 [2024-07-15 20:20:40.403971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.042 [2024-07-15 20:20:40.403983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.042 [2024-07-15 20:20:40.404041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.042 [2024-07-15 20:20:40.405899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.042 [2024-07-15 20:20:40.405938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.042 [2024-07-15 20:20:40.405942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.042 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.042 [2024-07-15 20:20:40.563988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.300 Malloc0 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.300 Malloc1 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.300 20:20:40 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.300 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.301 [2024-07-15 20:20:40.649710] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:02.301 00:15:02.301 Discovery Log Number of Records 2, Generation counter 2 00:15:02.301 =====Discovery Log Entry 0====== 00:15:02.301 trtype: tcp 00:15:02.301 adrfam: ipv4 00:15:02.301 subtype: current discovery subsystem 00:15:02.301 treq: not required 00:15:02.301 portid: 0 00:15:02.301 trsvcid: 4420 00:15:02.301 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:02.301 traddr: 10.0.0.2 00:15:02.301 eflags: explicit discovery connections, duplicate discovery information 00:15:02.301 sectype: none 00:15:02.301 =====Discovery Log Entry 1====== 00:15:02.301 trtype: tcp 00:15:02.301 adrfam: ipv4 00:15:02.301 subtype: nvme subsystem 00:15:02.301 treq: not required 00:15:02.301 portid: 0 00:15:02.301 trsvcid: 4420 00:15:02.301 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:02.301 traddr: 10.0.0.2 00:15:02.301 eflags: none 00:15:02.301 sectype: none 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- 
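The target provisioning sequence above (TCP transport, two malloc bdevs, one subsystem with two namespaces, data and discovery listeners) is what produces the two-entry discovery log that follows. Condensed into a sketch, run inside the target namespace with rpc.py assumed on PATH:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # The initiator should now see both the discovery subsystem and cnode1:
    nvme discover -t tcp -a 10.0.0.2 -s 4420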
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:02.301 20:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.234 20:20:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:03.234 20:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.234 20:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.234 20:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:03.234 20:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:03.234 20:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:05.149 20:20:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:05.149 /dev/nvme0n1 ]] 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.149 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:05.407 20:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.666 rmmod nvme_tcp 00:15:05.666 rmmod nvme_fabrics 00:15:05.666 rmmod nvme_keyring 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4011703 ']' 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4011703 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 4011703 ']' 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4011703 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.666 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4011703 00:15:05.924 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:05.924 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:05.924 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4011703' 00:15:05.924 killing process with pid 4011703 00:15:05.924 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4011703 00:15:05.924 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4011703 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.184 20:20:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.090 20:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:08.090 00:15:08.090 real 0m8.628s 00:15:08.090 user 0m16.720s 00:15:08.090 sys 0m2.282s 00:15:08.090 20:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.090 20:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.090 ************************************ 00:15:08.090 END TEST nvmf_nvme_cli 00:15:08.090 ************************************ 00:15:08.090 20:20:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:08.090 20:20:46 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:08.090 20:20:46 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:08.090 20:20:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.090 20:20:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.090 20:20:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.090 ************************************ 00:15:08.090 START TEST nvmf_vfio_user 00:15:08.090 ************************************ 00:15:08.090 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:08.348 * Looking for test storage... 00:15:08.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.348 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:08.349 
20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4012528 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4012528' 00:15:08.349 Process pid: 4012528 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4012528 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4012528 ']' 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.349 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:08.349 [2024-07-15 20:20:46.700126] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:15:08.349 [2024-07-15 20:20:46.700240] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.349 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.349 [2024-07-15 20:20:46.762623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.349 [2024-07-15 20:20:46.847724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.349 [2024-07-15 20:20:46.847777] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.349 [2024-07-15 20:20:46.847801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.349 [2024-07-15 20:20:46.847812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.349 [2024-07-15 20:20:46.847822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
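For readers following the nvmf_vfio_user flow, the target-side setup the script performs over the next steps boils down to a short RPC sequence. The sketch below is condensed from the commands recorded in this log (workspace-relative nvmf_tgt and rpc.py paths, sizes, NQNs and socket paths as logged); it is an illustration of what the test does, not a replacement for running nvmf_vfio_user.sh itself:

  # start the target on cores 0-3 with full tracing, as launched above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # create the VFIO-user transport and a controller socket directory for device 1
  $rpc_py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1

  # back the subsystem with a 64 MB malloc bdev (512-byte blocks) and expose it on the vfio-user socket
  $rpc_py bdev_malloc_create 64 512 -b Malloc1
  $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

  # the second device (vfio-user2/2, Malloc2, cnode2, serial SPDK2) repeats the same pattern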
00:15:08.349 [2024-07-15 20:20:46.847978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.349 [2024-07-15 20:20:46.848017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.349 [2024-07-15 20:20:46.848084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.349 [2024-07-15 20:20:46.848087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.608 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.608 20:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:08.608 20:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:09.568 20:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:09.826 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:09.826 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:09.827 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:09.827 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:09.827 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:10.085 Malloc1 00:15:10.085 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:10.342 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:10.600 20:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:10.857 20:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:10.857 20:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:10.857 20:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:11.115 Malloc2 00:15:11.115 20:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:11.373 20:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:11.631 20:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:11.890 20:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:11.890 20:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:11.890 20:20:50 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.890 20:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.890 20:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.891 20:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:11.891 [2024-07-15 20:20:50.266321] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:15:11.891 [2024-07-15 20:20:50.266364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013052 ] 00:15:11.891 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.891 [2024-07-15 20:20:50.301212] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:11.891 [2024-07-15 20:20:50.309815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.891 [2024-07-15 20:20:50.309844] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6d45081000 00:15:11.891 [2024-07-15 20:20:50.310809] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.311793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.312796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.313804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.314812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.315813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.316821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.317824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.891 [2024-07-15 20:20:50.318830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.891 [2024-07-15 20:20:50.318851] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6d43e35000 00:15:11.891 [2024-07-15 20:20:50.320162] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.891 [2024-07-15 20:20:50.338645] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:11.891 [2024-07-15 20:20:50.338681] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:11.891 [2024-07-15 20:20:50.342999] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:11.891 [2024-07-15 20:20:50.343050] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:11.891 [2024-07-15 20:20:50.343144] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:11.891 [2024-07-15 20:20:50.343188] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:11.891 [2024-07-15 20:20:50.343200] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:11.891 [2024-07-15 20:20:50.343991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:11.891 [2024-07-15 20:20:50.344011] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:11.891 [2024-07-15 20:20:50.344023] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:11.891 [2024-07-15 20:20:50.344992] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:11.891 [2024-07-15 20:20:50.345010] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:11.891 [2024-07-15 20:20:50.345024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:11.891 [2024-07-15 20:20:50.346003] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:11.891 [2024-07-15 20:20:50.346023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:11.891 [2024-07-15 20:20:50.347012] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:11.891 [2024-07-15 20:20:50.347031] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:11.891 [2024-07-15 20:20:50.347041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:11.891 [2024-07-15 20:20:50.347052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:11.891 [2024-07-15 20:20:50.347166] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:11.891 [2024-07-15 20:20:50.347190] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:11.891 [2024-07-15 20:20:50.347198] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:11.891 [2024-07-15 20:20:50.348019] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:11.891 [2024-07-15 20:20:50.349022] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:11.891 [2024-07-15 20:20:50.350033] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:11.891 [2024-07-15 20:20:50.351029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.891 [2024-07-15 20:20:50.351130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:11.891 [2024-07-15 20:20:50.352044] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:11.891 [2024-07-15 20:20:50.352063] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:11.891 [2024-07-15 20:20:50.352072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:11.891 [2024-07-15 20:20:50.352109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352132] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.891 [2024-07-15 20:20:50.352142] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.891 [2024-07-15 20:20:50.352160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.891 [2024-07-15 20:20:50.352241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:11.891 [2024-07-15 20:20:50.352258] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:11.891 [2024-07-15 20:20:50.352269] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:11.891 [2024-07-15 20:20:50.352277] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:11.891 [2024-07-15 20:20:50.352284] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:11.891 [2024-07-15 20:20:50.352292] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:11.891 [2024-07-15 20:20:50.352299] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:11.891 [2024-07-15 20:20:50.352306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352318] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:11.891 [2024-07-15 20:20:50.352358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:11.891 [2024-07-15 20:20:50.352380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.891 [2024-07-15 20:20:50.352394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.891 [2024-07-15 20:20:50.352406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.891 [2024-07-15 20:20:50.352417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.891 [2024-07-15 20:20:50.352426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:11.891 [2024-07-15 20:20:50.352465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:11.891 [2024-07-15 20:20:50.352475] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:11.891 [2024-07-15 20:20:50.352483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:11.891 [2024-07-15 20:20:50.352503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.352529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.352589] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352615] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:11.892 [2024-07-15 20:20:50.352623] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:11.892 [2024-07-15 20:20:50.352632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.352646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.352662] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:11.892 [2024-07-15 20:20:50.352682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352711] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.892 [2024-07-15 20:20:50.352719] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.892 [2024-07-15 20:20:50.352729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.352751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.352772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352798] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.892 [2024-07-15 20:20:50.352806] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.892 [2024-07-15 20:20:50.352815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.352827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.352840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:11.892 [2024-07-15 20:20:50.352887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352925] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:11.892 [2024-07-15 20:20:50.352932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:11.892 [2024-07-15 20:20:50.352940] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:11.892 [2024-07-15 20:20:50.352966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.352985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.353004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.353016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.353032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.353047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.353067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.353080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.353102] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:11.892 [2024-07-15 20:20:50.353112] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:11.892 [2024-07-15 20:20:50.353118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:11.892 [2024-07-15 20:20:50.353124] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:11.892 [2024-07-15 20:20:50.353133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:11.892 [2024-07-15 20:20:50.353145] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:11.892 
[2024-07-15 20:20:50.353153] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:11.892 [2024-07-15 20:20:50.353162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.353187] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:11.892 [2024-07-15 20:20:50.353195] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.892 [2024-07-15 20:20:50.353204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.353215] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:11.892 [2024-07-15 20:20:50.353223] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:11.892 [2024-07-15 20:20:50.353231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:11.892 [2024-07-15 20:20:50.353243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.353262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.353279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:11.892 [2024-07-15 20:20:50.353291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:11.892 ===================================================== 00:15:11.892 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.892 ===================================================== 00:15:11.892 Controller Capabilities/Features 00:15:11.892 ================================ 00:15:11.892 Vendor ID: 4e58 00:15:11.892 Subsystem Vendor ID: 4e58 00:15:11.892 Serial Number: SPDK1 00:15:11.892 Model Number: SPDK bdev Controller 00:15:11.892 Firmware Version: 24.09 00:15:11.892 Recommended Arb Burst: 6 00:15:11.892 IEEE OUI Identifier: 8d 6b 50 00:15:11.892 Multi-path I/O 00:15:11.892 May have multiple subsystem ports: Yes 00:15:11.892 May have multiple controllers: Yes 00:15:11.892 Associated with SR-IOV VF: No 00:15:11.892 Max Data Transfer Size: 131072 00:15:11.892 Max Number of Namespaces: 32 00:15:11.892 Max Number of I/O Queues: 127 00:15:11.892 NVMe Specification Version (VS): 1.3 00:15:11.892 NVMe Specification Version (Identify): 1.3 00:15:11.892 Maximum Queue Entries: 256 00:15:11.892 Contiguous Queues Required: Yes 00:15:11.892 Arbitration Mechanisms Supported 00:15:11.892 Weighted Round Robin: Not Supported 00:15:11.892 Vendor Specific: Not Supported 00:15:11.892 Reset Timeout: 15000 ms 00:15:11.892 Doorbell Stride: 4 bytes 00:15:11.892 NVM Subsystem Reset: Not Supported 00:15:11.892 Command Sets Supported 00:15:11.893 NVM Command Set: Supported 00:15:11.893 Boot Partition: Not Supported 00:15:11.893 Memory Page Size Minimum: 4096 bytes 00:15:11.893 Memory Page Size Maximum: 4096 bytes 00:15:11.893 Persistent Memory Region: Not Supported 
00:15:11.893 Optional Asynchronous Events Supported 00:15:11.893 Namespace Attribute Notices: Supported 00:15:11.893 Firmware Activation Notices: Not Supported 00:15:11.893 ANA Change Notices: Not Supported 00:15:11.893 PLE Aggregate Log Change Notices: Not Supported 00:15:11.893 LBA Status Info Alert Notices: Not Supported 00:15:11.893 EGE Aggregate Log Change Notices: Not Supported 00:15:11.893 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.893 Zone Descriptor Change Notices: Not Supported 00:15:11.893 Discovery Log Change Notices: Not Supported 00:15:11.893 Controller Attributes 00:15:11.893 128-bit Host Identifier: Supported 00:15:11.893 Non-Operational Permissive Mode: Not Supported 00:15:11.893 NVM Sets: Not Supported 00:15:11.893 Read Recovery Levels: Not Supported 00:15:11.893 Endurance Groups: Not Supported 00:15:11.893 Predictable Latency Mode: Not Supported 00:15:11.893 Traffic Based Keep ALive: Not Supported 00:15:11.893 Namespace Granularity: Not Supported 00:15:11.893 SQ Associations: Not Supported 00:15:11.893 UUID List: Not Supported 00:15:11.893 Multi-Domain Subsystem: Not Supported 00:15:11.893 Fixed Capacity Management: Not Supported 00:15:11.893 Variable Capacity Management: Not Supported 00:15:11.893 Delete Endurance Group: Not Supported 00:15:11.893 Delete NVM Set: Not Supported 00:15:11.893 Extended LBA Formats Supported: Not Supported 00:15:11.893 Flexible Data Placement Supported: Not Supported 00:15:11.893 00:15:11.893 Controller Memory Buffer Support 00:15:11.893 ================================ 00:15:11.893 Supported: No 00:15:11.893 00:15:11.893 Persistent Memory Region Support 00:15:11.893 ================================ 00:15:11.893 Supported: No 00:15:11.893 00:15:11.893 Admin Command Set Attributes 00:15:11.893 ============================ 00:15:11.893 Security Send/Receive: Not Supported 00:15:11.893 Format NVM: Not Supported 00:15:11.893 Firmware Activate/Download: Not Supported 00:15:11.893 Namespace Management: Not Supported 00:15:11.893 Device Self-Test: Not Supported 00:15:11.893 Directives: Not Supported 00:15:11.893 NVMe-MI: Not Supported 00:15:11.893 Virtualization Management: Not Supported 00:15:11.893 Doorbell Buffer Config: Not Supported 00:15:11.893 Get LBA Status Capability: Not Supported 00:15:11.893 Command & Feature Lockdown Capability: Not Supported 00:15:11.893 Abort Command Limit: 4 00:15:11.893 Async Event Request Limit: 4 00:15:11.893 Number of Firmware Slots: N/A 00:15:11.893 Firmware Slot 1 Read-Only: N/A 00:15:11.893 Firmware Activation Without Reset: N/A 00:15:11.893 Multiple Update Detection Support: N/A 00:15:11.893 Firmware Update Granularity: No Information Provided 00:15:11.893 Per-Namespace SMART Log: No 00:15:11.893 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.893 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:11.893 Command Effects Log Page: Supported 00:15:11.893 Get Log Page Extended Data: Supported 00:15:11.893 Telemetry Log Pages: Not Supported 00:15:11.893 Persistent Event Log Pages: Not Supported 00:15:11.893 Supported Log Pages Log Page: May Support 00:15:11.893 Commands Supported & Effects Log Page: Not Supported 00:15:11.893 Feature Identifiers & Effects Log Page:May Support 00:15:11.893 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.893 Data Area 4 for Telemetry Log: Not Supported 00:15:11.893 Error Log Page Entries Supported: 128 00:15:11.893 Keep Alive: Supported 00:15:11.893 Keep Alive Granularity: 10000 ms 00:15:11.893 00:15:11.893 NVM Command Set Attributes 
00:15:11.893 ========================== 00:15:11.893 Submission Queue Entry Size 00:15:11.893 Max: 64 00:15:11.893 Min: 64 00:15:11.893 Completion Queue Entry Size 00:15:11.893 Max: 16 00:15:11.893 Min: 16 00:15:11.893 Number of Namespaces: 32 00:15:11.893 Compare Command: Supported 00:15:11.893 Write Uncorrectable Command: Not Supported 00:15:11.893 Dataset Management Command: Supported 00:15:11.893 Write Zeroes Command: Supported 00:15:11.893 Set Features Save Field: Not Supported 00:15:11.893 Reservations: Not Supported 00:15:11.893 Timestamp: Not Supported 00:15:11.893 Copy: Supported 00:15:11.893 Volatile Write Cache: Present 00:15:11.893 Atomic Write Unit (Normal): 1 00:15:11.893 Atomic Write Unit (PFail): 1 00:15:11.893 Atomic Compare & Write Unit: 1 00:15:11.893 Fused Compare & Write: Supported 00:15:11.893 Scatter-Gather List 00:15:11.893 SGL Command Set: Supported (Dword aligned) 00:15:11.893 SGL Keyed: Not Supported 00:15:11.893 SGL Bit Bucket Descriptor: Not Supported 00:15:11.893 SGL Metadata Pointer: Not Supported 00:15:11.893 Oversized SGL: Not Supported 00:15:11.893 SGL Metadata Address: Not Supported 00:15:11.893 SGL Offset: Not Supported 00:15:11.893 Transport SGL Data Block: Not Supported 00:15:11.893 Replay Protected Memory Block: Not Supported 00:15:11.893 00:15:11.893 Firmware Slot Information 00:15:11.893 ========================= 00:15:11.893 Active slot: 1 00:15:11.893 Slot 1 Firmware Revision: 24.09 00:15:11.893 00:15:11.893 00:15:11.893 Commands Supported and Effects 00:15:11.893 ============================== 00:15:11.893 Admin Commands 00:15:11.893 -------------- 00:15:11.893 Get Log Page (02h): Supported 00:15:11.893 Identify (06h): Supported 00:15:11.893 Abort (08h): Supported 00:15:11.893 Set Features (09h): Supported 00:15:11.893 Get Features (0Ah): Supported 00:15:11.893 Asynchronous Event Request (0Ch): Supported 00:15:11.893 Keep Alive (18h): Supported 00:15:11.893 I/O Commands 00:15:11.893 ------------ 00:15:11.893 Flush (00h): Supported LBA-Change 00:15:11.893 Write (01h): Supported LBA-Change 00:15:11.893 Read (02h): Supported 00:15:11.893 Compare (05h): Supported 00:15:11.893 Write Zeroes (08h): Supported LBA-Change 00:15:11.893 Dataset Management (09h): Supported LBA-Change 00:15:11.893 Copy (19h): Supported LBA-Change 00:15:11.893 00:15:11.893 Error Log 00:15:11.893 ========= 00:15:11.893 00:15:11.893 Arbitration 00:15:11.893 =========== 00:15:11.893 Arbitration Burst: 1 00:15:11.893 00:15:11.893 Power Management 00:15:11.893 ================ 00:15:11.893 Number of Power States: 1 00:15:11.893 Current Power State: Power State #0 00:15:11.893 Power State #0: 00:15:11.893 Max Power: 0.00 W 00:15:11.893 Non-Operational State: Operational 00:15:11.893 Entry Latency: Not Reported 00:15:11.893 Exit Latency: Not Reported 00:15:11.893 Relative Read Throughput: 0 00:15:11.893 Relative Read Latency: 0 00:15:11.893 Relative Write Throughput: 0 00:15:11.893 Relative Write Latency: 0 00:15:11.893 Idle Power: Not Reported 00:15:11.893 Active Power: Not Reported 00:15:11.893 Non-Operational Permissive Mode: Not Supported 00:15:11.893 00:15:11.893 Health Information 00:15:11.893 ================== 00:15:11.893 Critical Warnings: 00:15:11.893 Available Spare Space: OK 00:15:11.893 Temperature: OK 00:15:11.893 Device Reliability: OK 00:15:11.893 Read Only: No 00:15:11.893 Volatile Memory Backup: OK 00:15:11.893 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:11.893 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:11.893 Available Spare: 0% 00:15:11.893 
Available Sp[2024-07-15 20:20:50.353407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:11.893 [2024-07-15 20:20:50.353423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:11.893 [2024-07-15 20:20:50.353465] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:11.893 [2024-07-15 20:20:50.353482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.893 [2024-07-15 20:20:50.353493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.893 [2024-07-15 20:20:50.353503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.893 [2024-07-15 20:20:50.353512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.893 [2024-07-15 20:20:50.354055] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:11.893 [2024-07-15 20:20:50.354079] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:11.893 [2024-07-15 20:20:50.355054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.893 [2024-07-15 20:20:50.355129] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:11.893 [2024-07-15 20:20:50.355145] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:11.893 [2024-07-15 20:20:50.356061] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:11.893 [2024-07-15 20:20:50.356084] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:11.893 [2024-07-15 20:20:50.356138] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:11.894 [2024-07-15 20:20:50.359902] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.894 are Threshold: 0% 00:15:11.894 Life Percentage Used: 0% 00:15:11.894 Data Units Read: 0 00:15:11.894 Data Units Written: 0 00:15:11.894 Host Read Commands: 0 00:15:11.894 Host Write Commands: 0 00:15:11.894 Controller Busy Time: 0 minutes 00:15:11.894 Power Cycles: 0 00:15:11.894 Power On Hours: 0 hours 00:15:11.894 Unsafe Shutdowns: 0 00:15:11.894 Unrecoverable Media Errors: 0 00:15:11.894 Lifetime Error Log Entries: 0 00:15:11.894 Warning Temperature Time: 0 minutes 00:15:11.894 Critical Temperature Time: 0 minutes 00:15:11.894 00:15:11.894 Number of Queues 00:15:11.894 ================ 00:15:11.894 Number of I/O Submission Queues: 127 00:15:11.894 Number of I/O Completion Queues: 127 00:15:11.894 00:15:11.894 Active Namespaces 00:15:11.894 ================= 00:15:11.894 Namespace ID:1 00:15:11.894 Error Recovery Timeout: Unlimited 00:15:11.894 Command 
Set Identifier: NVM (00h) 00:15:11.894 Deallocate: Supported 00:15:11.894 Deallocated/Unwritten Error: Not Supported 00:15:11.894 Deallocated Read Value: Unknown 00:15:11.894 Deallocate in Write Zeroes: Not Supported 00:15:11.894 Deallocated Guard Field: 0xFFFF 00:15:11.894 Flush: Supported 00:15:11.894 Reservation: Supported 00:15:11.894 Namespace Sharing Capabilities: Multiple Controllers 00:15:11.894 Size (in LBAs): 131072 (0GiB) 00:15:11.894 Capacity (in LBAs): 131072 (0GiB) 00:15:11.894 Utilization (in LBAs): 131072 (0GiB) 00:15:11.894 NGUID: 9D1FE9D97B1B475BB7BF7F38F211AC1F 00:15:11.894 UUID: 9d1fe9d9-7b1b-475b-b7bf-7f38f211ac1f 00:15:11.894 Thin Provisioning: Not Supported 00:15:11.894 Per-NS Atomic Units: Yes 00:15:11.894 Atomic Boundary Size (Normal): 0 00:15:11.894 Atomic Boundary Size (PFail): 0 00:15:11.894 Atomic Boundary Offset: 0 00:15:11.894 Maximum Single Source Range Length: 65535 00:15:11.894 Maximum Copy Length: 65535 00:15:11.894 Maximum Source Range Count: 1 00:15:11.894 NGUID/EUI64 Never Reused: No 00:15:11.894 Namespace Write Protected: No 00:15:11.894 Number of LBA Formats: 1 00:15:11.894 Current LBA Format: LBA Format #00 00:15:11.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.894 00:15:11.894 20:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:12.152 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.152 [2024-07-15 20:20:50.591733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.411 Initializing NVMe Controllers 00:15:17.411 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:17.411 Initialization complete. Launching workers. 00:15:17.411 ======================================================== 00:15:17.411 Latency(us) 00:15:17.411 Device Information : IOPS MiB/s Average min max 00:15:17.411 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34601.60 135.16 3700.09 1163.40 7320.02 00:15:17.411 ======================================================== 00:15:17.411 Total : 34601.60 135.16 3700.09 1163.40 7320.02 00:15:17.411 00:15:17.411 [2024-07-15 20:20:55.615000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.411 20:20:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:17.411 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.411 [2024-07-15 20:20:55.849174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.692 Initializing NVMe Controllers 00:15:22.692 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:22.692 Initialization complete. Launching workers. 
00:15:22.692 ======================================================== 00:15:22.692 Latency(us) 00:15:22.692 Device Information : IOPS MiB/s Average min max 00:15:22.692 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15948.35 62.30 8031.18 5983.37 15976.10 00:15:22.692 ======================================================== 00:15:22.692 Total : 15948.35 62.30 8031.18 5983.37 15976.10 00:15:22.692 00:15:22.692 [2024-07-15 20:21:00.884758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.692 20:21:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:22.692 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.692 [2024-07-15 20:21:01.090815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.952 [2024-07-15 20:21:06.161259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.952 Initializing NVMe Controllers 00:15:27.952 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.952 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:27.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:27.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:27.952 Initialization complete. Launching workers. 00:15:27.952 Starting thread on core 2 00:15:27.952 Starting thread on core 3 00:15:27.952 Starting thread on core 1 00:15:27.952 20:21:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:27.952 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.952 [2024-07-15 20:21:06.471404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.243 [2024-07-15 20:21:09.543341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.243 Initializing NVMe Controllers 00:15:31.243 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.243 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.243 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:31.243 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:31.243 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:31.243 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:31.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:31.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:31.243 Initialization complete. Launching workers. 
00:15:31.243 Starting thread on core 1 with urgent priority queue 00:15:31.243 Starting thread on core 2 with urgent priority queue 00:15:31.243 Starting thread on core 3 with urgent priority queue 00:15:31.243 Starting thread on core 0 with urgent priority queue 00:15:31.243 SPDK bdev Controller (SPDK1 ) core 0: 5369.00 IO/s 18.63 secs/100000 ios 00:15:31.243 SPDK bdev Controller (SPDK1 ) core 1: 5485.67 IO/s 18.23 secs/100000 ios 00:15:31.243 SPDK bdev Controller (SPDK1 ) core 2: 5827.00 IO/s 17.16 secs/100000 ios 00:15:31.243 SPDK bdev Controller (SPDK1 ) core 3: 5850.67 IO/s 17.09 secs/100000 ios 00:15:31.243 ======================================================== 00:15:31.243 00:15:31.243 20:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:31.243 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.501 [2024-07-15 20:21:09.833444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.501 Initializing NVMe Controllers 00:15:31.501 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.501 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.501 Namespace ID: 1 size: 0GB 00:15:31.501 Initialization complete. 00:15:31.501 INFO: using host memory buffer for IO 00:15:31.501 Hello world! 00:15:31.501 [2024-07-15 20:21:09.868067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.501 20:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:31.501 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.758 [2024-07-15 20:21:10.157397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.749 Initializing NVMe Controllers 00:15:32.749 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:32.749 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:32.749 Initialization complete. Launching workers. 
00:15:32.749 submit (in ns) avg, min, max = 7834.1, 3512.2, 4019052.2 00:15:32.749 complete (in ns) avg, min, max = 24993.4, 2058.9, 4014995.6 00:15:32.749 00:15:32.749 Submit histogram 00:15:32.749 ================ 00:15:32.749 Range in us Cumulative Count 00:15:32.749 3.508 - 3.532: 0.2605% ( 35) 00:15:32.749 3.532 - 3.556: 0.9379% ( 91) 00:15:32.749 3.556 - 3.579: 2.7468% ( 243) 00:15:32.749 3.579 - 3.603: 6.8706% ( 554) 00:15:32.749 3.603 - 3.627: 13.2351% ( 855) 00:15:32.749 3.627 - 3.650: 22.0560% ( 1185) 00:15:32.749 3.650 - 3.674: 31.7180% ( 1298) 00:15:32.749 3.674 - 3.698: 40.1965% ( 1139) 00:15:32.749 3.698 - 3.721: 47.6850% ( 1006) 00:15:32.749 3.721 - 3.745: 52.5458% ( 653) 00:15:32.749 3.745 - 3.769: 56.7143% ( 560) 00:15:32.749 3.769 - 3.793: 60.4660% ( 504) 00:15:32.749 3.793 - 3.816: 64.2847% ( 513) 00:15:32.749 3.816 - 3.840: 67.7832% ( 470) 00:15:32.749 3.840 - 3.864: 71.8848% ( 551) 00:15:32.749 3.864 - 3.887: 75.8225% ( 529) 00:15:32.749 3.887 - 3.911: 80.0506% ( 568) 00:15:32.749 3.911 - 3.935: 83.2515% ( 430) 00:15:32.749 3.935 - 3.959: 85.7005% ( 329) 00:15:32.749 3.959 - 3.982: 87.6061% ( 256) 00:15:32.749 3.982 - 4.006: 89.1916% ( 213) 00:15:32.749 4.006 - 4.030: 90.3975% ( 162) 00:15:32.749 4.030 - 4.053: 91.5587% ( 156) 00:15:32.749 4.053 - 4.077: 92.5562% ( 134) 00:15:32.749 4.077 - 4.101: 93.6281% ( 144) 00:15:32.749 4.101 - 4.124: 94.4246% ( 107) 00:15:32.749 4.124 - 4.148: 95.0648% ( 86) 00:15:32.749 4.148 - 4.172: 95.5337% ( 63) 00:15:32.749 4.172 - 4.196: 95.9431% ( 55) 00:15:32.749 4.196 - 4.219: 96.2260% ( 38) 00:15:32.749 4.219 - 4.243: 96.4344% ( 28) 00:15:32.749 4.243 - 4.267: 96.5535% ( 16) 00:15:32.749 4.267 - 4.290: 96.7024% ( 20) 00:15:32.749 4.290 - 4.314: 96.8289% ( 17) 00:15:32.749 4.314 - 4.338: 96.9406% ( 15) 00:15:32.749 4.338 - 4.361: 97.0225% ( 11) 00:15:32.749 4.361 - 4.385: 97.0969% ( 10) 00:15:32.749 4.385 - 4.409: 97.1714% ( 10) 00:15:32.749 4.409 - 4.433: 97.2160% ( 6) 00:15:32.749 4.433 - 4.456: 97.2830% ( 9) 00:15:32.749 4.456 - 4.480: 97.3202% ( 5) 00:15:32.749 4.480 - 4.504: 97.3723% ( 7) 00:15:32.749 4.504 - 4.527: 97.3947% ( 3) 00:15:32.749 4.527 - 4.551: 97.4096% ( 2) 00:15:32.749 4.575 - 4.599: 97.4393% ( 4) 00:15:32.749 4.599 - 4.622: 97.4542% ( 2) 00:15:32.749 4.622 - 4.646: 97.4766% ( 3) 00:15:32.749 4.646 - 4.670: 97.5063% ( 4) 00:15:32.749 4.670 - 4.693: 97.5138% ( 1) 00:15:32.749 4.693 - 4.717: 97.5659% ( 7) 00:15:32.749 4.717 - 4.741: 97.6105% ( 6) 00:15:32.749 4.741 - 4.764: 97.7148% ( 14) 00:15:32.749 4.764 - 4.788: 97.7966% ( 11) 00:15:32.749 4.788 - 4.812: 97.8339% ( 5) 00:15:32.749 4.812 - 4.836: 97.8487% ( 2) 00:15:32.749 4.836 - 4.859: 97.8934% ( 6) 00:15:32.749 4.883 - 4.907: 97.9455% ( 7) 00:15:32.749 4.907 - 4.930: 97.9827% ( 5) 00:15:32.749 4.930 - 4.954: 98.0125% ( 4) 00:15:32.749 4.978 - 5.001: 98.0274% ( 2) 00:15:32.749 5.001 - 5.025: 98.0721% ( 6) 00:15:32.749 5.025 - 5.049: 98.1167% ( 6) 00:15:32.749 5.049 - 5.073: 98.1316% ( 2) 00:15:32.749 5.073 - 5.096: 98.1465% ( 2) 00:15:32.749 5.096 - 5.120: 98.1539% ( 1) 00:15:32.749 5.120 - 5.144: 98.1614% ( 1) 00:15:32.749 5.144 - 5.167: 98.1763% ( 2) 00:15:32.749 5.167 - 5.191: 98.1837% ( 1) 00:15:32.749 5.191 - 5.215: 98.1912% ( 1) 00:15:32.749 5.215 - 5.239: 98.1986% ( 1) 00:15:32.749 5.262 - 5.286: 98.2135% ( 2) 00:15:32.749 5.286 - 5.310: 98.2209% ( 1) 00:15:32.749 5.476 - 5.499: 98.2284% ( 1) 00:15:32.749 5.665 - 5.689: 98.2358% ( 1) 00:15:32.749 5.760 - 5.784: 98.2433% ( 1) 00:15:32.749 5.807 - 5.831: 98.2507% ( 1) 00:15:32.749 5.831 - 5.855: 98.2582% ( 
1) 00:15:32.749 5.855 - 5.879: 98.2656% ( 1) 00:15:32.749 5.879 - 5.902: 98.2730% ( 1) 00:15:32.749 5.902 - 5.926: 98.2805% ( 1) 00:15:32.749 5.926 - 5.950: 98.2954% ( 2) 00:15:32.749 5.997 - 6.021: 98.3028% ( 1) 00:15:32.749 6.044 - 6.068: 98.3103% ( 1) 00:15:32.749 6.068 - 6.116: 98.3251% ( 2) 00:15:32.749 6.210 - 6.258: 98.3326% ( 1) 00:15:32.749 6.305 - 6.353: 98.3400% ( 1) 00:15:32.749 6.353 - 6.400: 98.3475% ( 1) 00:15:32.749 6.495 - 6.542: 98.3549% ( 1) 00:15:32.749 6.542 - 6.590: 98.3624% ( 1) 00:15:32.749 6.684 - 6.732: 98.3698% ( 1) 00:15:32.749 6.874 - 6.921: 98.3847% ( 2) 00:15:32.749 6.921 - 6.969: 98.3921% ( 1) 00:15:32.749 7.016 - 7.064: 98.4219% ( 4) 00:15:32.749 7.064 - 7.111: 98.4368% ( 2) 00:15:32.749 7.159 - 7.206: 98.4442% ( 1) 00:15:32.749 7.253 - 7.301: 98.4517% ( 1) 00:15:32.749 7.348 - 7.396: 98.4740% ( 3) 00:15:32.749 7.396 - 7.443: 98.4889% ( 2) 00:15:32.749 7.443 - 7.490: 98.5038% ( 2) 00:15:32.749 7.538 - 7.585: 98.5112% ( 1) 00:15:32.749 7.633 - 7.680: 98.5261% ( 2) 00:15:32.749 7.680 - 7.727: 98.5336% ( 1) 00:15:32.749 7.822 - 7.870: 98.5559% ( 3) 00:15:32.749 7.870 - 7.917: 98.5857% ( 4) 00:15:32.749 7.917 - 7.964: 98.6080% ( 3) 00:15:32.749 8.059 - 8.107: 98.6303% ( 3) 00:15:32.749 8.154 - 8.201: 98.6378% ( 1) 00:15:32.749 8.201 - 8.249: 98.6452% ( 1) 00:15:32.749 8.249 - 8.296: 98.6527% ( 1) 00:15:32.749 8.439 - 8.486: 98.6676% ( 2) 00:15:32.749 8.581 - 8.628: 98.6824% ( 2) 00:15:32.749 8.818 - 8.865: 98.6973% ( 2) 00:15:32.749 8.865 - 8.913: 98.7048% ( 1) 00:15:32.749 8.913 - 8.960: 98.7197% ( 2) 00:15:32.749 9.102 - 9.150: 98.7271% ( 1) 00:15:32.750 9.244 - 9.292: 98.7346% ( 1) 00:15:32.750 9.387 - 9.434: 98.7420% ( 1) 00:15:32.750 9.671 - 9.719: 98.7569% ( 2) 00:15:32.750 9.813 - 9.861: 98.7643% ( 1) 00:15:32.750 10.240 - 10.287: 98.7718% ( 1) 00:15:32.750 10.382 - 10.430: 98.7792% ( 1) 00:15:32.750 10.714 - 10.761: 98.7867% ( 1) 00:15:32.750 11.046 - 11.093: 98.7941% ( 1) 00:15:32.750 11.188 - 11.236: 98.8015% ( 1) 00:15:32.750 11.236 - 11.283: 98.8090% ( 1) 00:15:32.750 11.378 - 11.425: 98.8164% ( 1) 00:15:32.750 11.520 - 11.567: 98.8239% ( 1) 00:15:32.750 11.567 - 11.615: 98.8313% ( 1) 00:15:32.750 11.710 - 11.757: 98.8388% ( 1) 00:15:32.750 11.757 - 11.804: 98.8537% ( 2) 00:15:32.750 11.852 - 11.899: 98.8611% ( 1) 00:15:32.750 11.994 - 12.041: 98.8685% ( 1) 00:15:32.750 12.231 - 12.326: 98.8760% ( 1) 00:15:32.750 12.326 - 12.421: 98.8834% ( 1) 00:15:32.750 12.421 - 12.516: 98.9058% ( 3) 00:15:32.750 12.705 - 12.800: 98.9132% ( 1) 00:15:32.750 13.274 - 13.369: 98.9206% ( 1) 00:15:32.750 14.033 - 14.127: 98.9355% ( 2) 00:15:32.750 14.222 - 14.317: 98.9430% ( 1) 00:15:32.750 14.886 - 14.981: 98.9504% ( 1) 00:15:32.750 15.170 - 15.265: 98.9579% ( 1) 00:15:32.750 16.024 - 16.119: 98.9653% ( 1) 00:15:32.750 16.877 - 16.972: 98.9728% ( 1) 00:15:32.750 17.067 - 17.161: 98.9876% ( 2) 00:15:32.750 17.161 - 17.256: 99.0174% ( 4) 00:15:32.750 17.256 - 17.351: 99.0249% ( 1) 00:15:32.750 17.351 - 17.446: 99.0397% ( 2) 00:15:32.750 17.446 - 17.541: 99.0472% ( 1) 00:15:32.750 17.541 - 17.636: 99.0993% ( 7) 00:15:32.750 17.636 - 17.730: 99.1589% ( 8) 00:15:32.750 17.730 - 17.825: 99.1812% ( 3) 00:15:32.750 17.825 - 17.920: 99.2035% ( 3) 00:15:32.750 17.920 - 18.015: 99.2333% ( 4) 00:15:32.750 18.015 - 18.110: 99.3301% ( 13) 00:15:32.750 18.110 - 18.204: 99.3822% ( 7) 00:15:32.750 18.204 - 18.299: 99.4343% ( 7) 00:15:32.750 18.299 - 18.394: 99.5013% ( 9) 00:15:32.750 18.394 - 18.489: 99.5906% ( 12) 00:15:32.750 18.489 - 18.584: 99.6353% ( 6) 00:15:32.750 18.584 - 
18.679: 99.6650% ( 4) 00:15:32.750 18.679 - 18.773: 99.7022% ( 5) 00:15:32.750 18.773 - 18.868: 99.7469% ( 6) 00:15:32.750 18.868 - 18.963: 99.7618% ( 2) 00:15:32.750 19.058 - 19.153: 99.7692% ( 1) 00:15:32.750 19.153 - 19.247: 99.7916% ( 3) 00:15:32.750 19.247 - 19.342: 99.8065% ( 2) 00:15:32.750 19.532 - 19.627: 99.8139% ( 1) 00:15:32.750 20.006 - 20.101: 99.8213% ( 1) 00:15:32.750 20.196 - 20.290: 99.8362% ( 2) 00:15:32.750 21.902 - 21.997: 99.8437% ( 1) 00:15:32.750 25.410 - 25.600: 99.8511% ( 1) 00:15:32.750 25.979 - 26.169: 99.8586% ( 1) 00:15:32.750 26.738 - 26.927: 99.8660% ( 1) 00:15:32.750 26.927 - 27.117: 99.8735% ( 1) 00:15:32.750 27.496 - 27.686: 99.8809% ( 1) 00:15:32.750 27.876 - 28.065: 99.8883% ( 1) 00:15:32.750 28.255 - 28.444: 99.8958% ( 1) 00:15:32.750 30.341 - 30.530: 99.9032% ( 1) 00:15:32.750 3980.705 - 4004.978: 99.9628% ( 8) 00:15:32.750 4004.978 - 4029.250: 100.0000% ( 5) 00:15:32.750 00:15:32.750 Complete histogram 00:15:32.750 ================== 00:15:32.750 Range in us Cumulative Count 00:15:32.750 2.050 - 2.062: 0.2084% ( 28) 00:15:32.750 2.062 - 2.074: 22.9939% ( 3061) 00:15:32.750 2.074 - 2.086: 38.1792% ( 2040) 00:15:32.750 2.086 - 2.098: 41.6406% ( 465) 00:15:32.750 2.098 - 2.110: 56.2826% ( 1967) 00:15:32.750 2.110 - 2.121: 60.3990% ( 553) 00:15:32.750 2.121 - 2.133: 63.0639% ( 358) 00:15:32.750 2.133 - 2.145: 71.7657% ( 1169) 00:15:32.750 2.145 - 2.157: 74.4231% ( 357) 00:15:32.750 2.157 - 2.169: 76.5446% ( 285) 00:15:32.750 2.169 - 2.181: 80.2293% ( 495) 00:15:32.750 2.181 - 2.193: 81.3682% ( 153) 00:15:32.750 2.193 - 2.204: 82.4699% ( 148) 00:15:32.750 2.204 - 2.216: 86.5491% ( 548) 00:15:32.750 2.216 - 2.228: 89.1842% ( 354) 00:15:32.750 2.228 - 2.240: 90.8367% ( 222) 00:15:32.750 2.240 - 2.252: 93.0028% ( 291) 00:15:32.750 2.252 - 2.264: 93.7323% ( 98) 00:15:32.750 2.264 - 2.276: 94.1343% ( 54) 00:15:32.750 2.276 - 2.287: 94.4990% ( 49) 00:15:32.750 2.287 - 2.299: 95.0722% ( 77) 00:15:32.750 2.299 - 2.311: 95.5709% ( 67) 00:15:32.750 2.311 - 2.323: 95.7198% ( 20) 00:15:32.750 2.323 - 2.335: 95.8464% ( 17) 00:15:32.750 2.335 - 2.347: 95.9282% ( 11) 00:15:32.750 2.347 - 2.359: 96.1739% ( 33) 00:15:32.750 2.359 - 2.370: 96.5461% ( 50) 00:15:32.750 2.370 - 2.382: 96.9927% ( 60) 00:15:32.750 2.382 - 2.394: 97.4021% ( 55) 00:15:32.750 2.394 - 2.406: 97.5659% ( 22) 00:15:32.750 2.406 - 2.418: 97.6552% ( 12) 00:15:32.750 2.418 - 2.430: 97.7445% ( 12) 00:15:32.750 2.430 - 2.441: 97.8711% ( 17) 00:15:32.750 2.441 - 2.453: 98.0274% ( 21) 00:15:32.750 2.453 - 2.465: 98.1763% ( 20) 00:15:32.750 2.465 - 2.477: 98.2284% ( 7) 00:15:32.750 2.477 - 2.489: 98.2730% ( 6) 00:15:32.750 2.489 - 2.501: 98.3177% ( 6) 00:15:32.750 2.501 - 2.513: 98.3475% ( 4) 00:15:32.750 2.513 - 2.524: 98.3549% ( 1) 00:15:32.750 2.524 - 2.536: 98.3773% ( 3) 00:15:32.750 2.536 - 2.548: 98.3847% ( 1) 00:15:32.750 2.548 - 2.560: 98.4070% ( 3) 00:15:32.750 2.619 - 2.631: 98.4145% ( 1) 00:15:32.750 2.631 - 2.643: 98.4219% ( 1) 00:15:32.750 2.643 - 2.655: 98.4368% ( 2) 00:15:32.750 2.726 - 2.738: 98.4442% ( 1) 00:15:32.750 3.058 - 3.081: 98.4517% ( 1) 00:15:32.750 3.129 - 3.153: 98.4591% ( 1) 00:15:32.750 3.176 - 3.200: 98.4666% ( 1) 00:15:32.750 3.200 - 3.224: 98.4815% ( 2) 00:15:32.750 3.224 - 3.247: 98.4889% ( 1) 00:15:32.750 3.247 - 3.271: 98.4964% ( 1) 00:15:32.750 3.271 - 3.295: 98.5112% ( 2) 00:15:32.750 3.295 - 3.319: 98.5261% ( 2) 00:15:32.750 3.319 - 3.342: 98.5336% ( 1) 00:15:32.750 3.342 - 3.366: 98.5485% ( 2) 00:15:32.750 3.390 - 3.413: 98.5559% ( 1) 00:15:32.750 3.437 - 3.461: 
98.5633% ( 1) 00:15:32.750 3.461 - 3.484: 98.5782% ( 2) 00:15:32.750 3.484 - 3.508: 98.5857% ( 1) 00:15:32.750 3.508 - 3.532: 98.5931% ( 1) 00:15:32.750 3.556 - 3.579: 98.6080% ( 2) 00:15:32.750 3.579 - 3.603: 98.6229% ( 2) 00:15:32.750 3.627 - 3.650: 98.6527% ( 4) 00:15:32.750 3.650 - 3.674: 98.6601% ( 1) 00:15:32.750 3.745 - 3.769: 98.6676% ( 1) 00:15:32.750 3.816 - 3.840: 98.6750% ( 1) 00:15:32.750 3.840 - 3.864: 98.6824% ( 1) 00:15:32.750 5.049 - 5.073: 98.6899% ( 1) 00:15:32.750 5.167 - 5.191: 98.6973% ( 1) 00:15:32.750 5.310 - 5.333: 98.7048% ( 1) 00:15:32.750 [2024-07-15 20:21:11.182376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.750 5.381 - 5.404: 98.7122% ( 1) 00:15:32.750 5.594 - 5.618: 98.7271% ( 2) 00:15:32.750 5.855 - 5.879: 98.7346% ( 1) 00:15:32.750 6.021 - 6.044: 98.7420% ( 1) 00:15:32.750 6.068 - 6.116: 98.7494% ( 1) 00:15:32.750 6.163 - 6.210: 98.7569% ( 1) 00:15:32.750 6.210 - 6.258: 98.7643% ( 1) 00:15:32.750 6.258 - 6.305: 98.7718% ( 1) 00:15:32.750 6.305 - 6.353: 98.7792% ( 1) 00:15:32.750 6.353 - 6.400: 98.7867% ( 1) 00:15:32.750 6.684 - 6.732: 98.7941% ( 1) 00:15:32.750 6.732 - 6.779: 98.8015% ( 1) 00:15:32.750 6.969 - 7.016: 98.8090% ( 1) 00:15:32.750 7.396 - 7.443: 98.8164% ( 1) 00:15:32.750 7.443 - 7.490: 98.8239% ( 1) 00:15:32.750 9.102 - 9.150: 98.8313% ( 1) 00:15:32.750 15.455 - 15.550: 98.8388% ( 1) 00:15:32.750 15.550 - 15.644: 98.8537% ( 2) 00:15:32.750 15.644 - 15.739: 98.8760% ( 3) 00:15:32.750 15.739 - 15.834: 98.8909% ( 2) 00:15:32.750 15.834 - 15.929: 98.9206% ( 4) 00:15:32.750 15.929 - 16.024: 98.9504% ( 4) 00:15:32.750 16.024 - 16.119: 98.9579% ( 1) 00:15:32.750 16.119 - 16.213: 98.9876% ( 4) 00:15:32.750 16.213 - 16.308: 99.0323% ( 6) 00:15:32.750 16.308 - 16.403: 99.0546% ( 3) 00:15:32.750 16.403 - 16.498: 99.1067% ( 7) 00:15:32.750 16.498 - 16.593: 99.1440% ( 5) 00:15:32.750 16.593 - 16.687: 99.2035% ( 8) 00:15:32.750 16.687 - 16.782: 99.2631% ( 8) 00:15:32.750 16.782 - 16.877: 99.2928% ( 4) 00:15:32.750 16.877 - 16.972: 99.3077% ( 2) 00:15:32.750 16.972 - 17.067: 99.3301% ( 3) 00:15:32.750 17.067 - 17.161: 99.3449% ( 2) 00:15:32.750 17.161 - 17.256: 99.3524% ( 1) 00:15:32.750 17.256 - 17.351: 99.3673% ( 2) 00:15:32.750 17.351 - 17.446: 99.4045% ( 5) 00:15:32.750 17.541 - 17.636: 99.4119% ( 1) 00:15:32.750 17.825 - 17.920: 99.4194% ( 1) 00:15:32.750 22.376 - 22.471: 99.4268% ( 1) 00:15:32.750 2014.625 - 2026.761: 99.4343% ( 1) 00:15:32.750 3980.705 - 4004.978: 99.9330% ( 67) 00:15:32.750 4004.978 - 4029.250: 100.0000% ( 9) 00:15:32.750 00:15:32.750 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:32.750 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:32.750 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:32.750 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:32.750 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.009 [ 00:15:33.009 { 00:15:33.009 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.009 "subtype": "Discovery", 00:15:33.009 "listen_addresses": [], 00:15:33.009 "allow_any_host": true, 00:15:33.009 "hosts": [] 00:15:33.009 }, 00:15:33.009 { 
00:15:33.009 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.009 "subtype": "NVMe", 00:15:33.009 "listen_addresses": [ 00:15:33.009 { 00:15:33.009 "trtype": "VFIOUSER", 00:15:33.009 "adrfam": "IPv4", 00:15:33.009 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.009 "trsvcid": "0" 00:15:33.009 } 00:15:33.009 ], 00:15:33.009 "allow_any_host": true, 00:15:33.009 "hosts": [], 00:15:33.009 "serial_number": "SPDK1", 00:15:33.009 "model_number": "SPDK bdev Controller", 00:15:33.009 "max_namespaces": 32, 00:15:33.009 "min_cntlid": 1, 00:15:33.009 "max_cntlid": 65519, 00:15:33.009 "namespaces": [ 00:15:33.009 { 00:15:33.009 "nsid": 1, 00:15:33.009 "bdev_name": "Malloc1", 00:15:33.009 "name": "Malloc1", 00:15:33.009 "nguid": "9D1FE9D97B1B475BB7BF7F38F211AC1F", 00:15:33.009 "uuid": "9d1fe9d9-7b1b-475b-b7bf-7f38f211ac1f" 00:15:33.009 } 00:15:33.009 ] 00:15:33.009 }, 00:15:33.009 { 00:15:33.009 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.009 "subtype": "NVMe", 00:15:33.009 "listen_addresses": [ 00:15:33.009 { 00:15:33.009 "trtype": "VFIOUSER", 00:15:33.009 "adrfam": "IPv4", 00:15:33.009 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.009 "trsvcid": "0" 00:15:33.009 } 00:15:33.009 ], 00:15:33.009 "allow_any_host": true, 00:15:33.009 "hosts": [], 00:15:33.009 "serial_number": "SPDK2", 00:15:33.009 "model_number": "SPDK bdev Controller", 00:15:33.009 "max_namespaces": 32, 00:15:33.009 "min_cntlid": 1, 00:15:33.009 "max_cntlid": 65519, 00:15:33.009 "namespaces": [ 00:15:33.009 { 00:15:33.009 "nsid": 1, 00:15:33.009 "bdev_name": "Malloc2", 00:15:33.009 "name": "Malloc2", 00:15:33.009 "nguid": "7F41D0989EE04CF58EDD796C2383C18D", 00:15:33.009 "uuid": "7f41d098-9ee0-4cf5-8edd-796c2383c18d" 00:15:33.009 } 00:15:33.009 ] 00:15:33.009 } 00:15:33.009 ] 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4016105 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.009 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:33.268 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.268 [2024-07-15 20:21:11.682379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.528 Malloc3 00:15:33.528 20:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:33.528 [2024-07-15 20:21:12.042012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.528 20:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.786 Asynchronous Event Request test 00:15:33.786 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.786 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.786 Registering asynchronous event callbacks... 00:15:33.786 Starting namespace attribute notice tests for all controllers... 00:15:33.787 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:33.787 aer_cb - Changed Namespace 00:15:33.787 Cleaning up... 00:15:33.787 [ 00:15:33.787 { 00:15:33.787 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.787 "subtype": "Discovery", 00:15:33.787 "listen_addresses": [], 00:15:33.787 "allow_any_host": true, 00:15:33.787 "hosts": [] 00:15:33.787 }, 00:15:33.787 { 00:15:33.787 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.787 "subtype": "NVMe", 00:15:33.787 "listen_addresses": [ 00:15:33.787 { 00:15:33.787 "trtype": "VFIOUSER", 00:15:33.787 "adrfam": "IPv4", 00:15:33.787 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.787 "trsvcid": "0" 00:15:33.787 } 00:15:33.787 ], 00:15:33.787 "allow_any_host": true, 00:15:33.787 "hosts": [], 00:15:33.787 "serial_number": "SPDK1", 00:15:33.787 "model_number": "SPDK bdev Controller", 00:15:33.787 "max_namespaces": 32, 00:15:33.787 "min_cntlid": 1, 00:15:33.787 "max_cntlid": 65519, 00:15:33.787 "namespaces": [ 00:15:33.787 { 00:15:33.787 "nsid": 1, 00:15:33.787 "bdev_name": "Malloc1", 00:15:33.787 "name": "Malloc1", 00:15:33.787 "nguid": "9D1FE9D97B1B475BB7BF7F38F211AC1F", 00:15:33.787 "uuid": "9d1fe9d9-7b1b-475b-b7bf-7f38f211ac1f" 00:15:33.787 }, 00:15:33.787 { 00:15:33.787 "nsid": 2, 00:15:33.787 "bdev_name": "Malloc3", 00:15:33.787 "name": "Malloc3", 00:15:33.787 "nguid": "C22F8FE2FC9F437DADE68F3EC6DBFE68", 00:15:33.787 "uuid": "c22f8fe2-fc9f-437d-ade6-8f3ec6dbfe68" 00:15:33.787 } 00:15:33.787 ] 00:15:33.787 }, 00:15:33.787 { 00:15:33.787 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.787 "subtype": "NVMe", 00:15:33.787 "listen_addresses": [ 00:15:33.787 { 00:15:33.787 "trtype": "VFIOUSER", 00:15:33.787 "adrfam": "IPv4", 00:15:33.787 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.787 "trsvcid": "0" 00:15:33.787 } 00:15:33.787 ], 00:15:33.787 "allow_any_host": true, 00:15:33.787 "hosts": [], 00:15:33.787 "serial_number": "SPDK2", 00:15:33.787 "model_number": "SPDK bdev Controller", 00:15:33.787 
"max_namespaces": 32, 00:15:33.787 "min_cntlid": 1, 00:15:33.787 "max_cntlid": 65519, 00:15:33.787 "namespaces": [ 00:15:33.787 { 00:15:33.787 "nsid": 1, 00:15:33.787 "bdev_name": "Malloc2", 00:15:33.787 "name": "Malloc2", 00:15:33.787 "nguid": "7F41D0989EE04CF58EDD796C2383C18D", 00:15:33.787 "uuid": "7f41d098-9ee0-4cf5-8edd-796c2383c18d" 00:15:33.787 } 00:15:33.787 ] 00:15:33.787 } 00:15:33.787 ] 00:15:33.787 20:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4016105 00:15:33.787 20:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.787 20:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.787 20:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.787 20:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:34.047 [2024-07-15 20:21:12.322137] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:15:34.047 [2024-07-15 20:21:12.322180] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016203 ] 00:15:34.047 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.047 [2024-07-15 20:21:12.357059] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:34.047 [2024-07-15 20:21:12.363197] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:34.047 [2024-07-15 20:21:12.363234] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2552faa000 00:15:34.047 [2024-07-15 20:21:12.364182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.365203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.366216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.367219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.368231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.369232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.370236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.371256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.047 [2024-07-15 20:21:12.372251] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:34.047 [2024-07-15 20:21:12.372272] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2551d5e000 00:15:34.047 [2024-07-15 20:21:12.373389] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:34.047 [2024-07-15 20:21:12.387605] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:34.047 [2024-07-15 20:21:12.387638] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:34.047 [2024-07-15 20:21:12.392738] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:34.047 [2024-07-15 20:21:12.392787] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:34.047 [2024-07-15 20:21:12.392867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:34.047 [2024-07-15 20:21:12.392914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:34.047 [2024-07-15 20:21:12.392926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:34.047 [2024-07-15 20:21:12.393738] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:34.047 [2024-07-15 20:21:12.393757] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:34.048 [2024-07-15 20:21:12.393769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:34.048 [2024-07-15 20:21:12.394740] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:34.048 [2024-07-15 20:21:12.394759] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:34.048 [2024-07-15 20:21:12.394772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:34.048 [2024-07-15 20:21:12.395743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:34.048 [2024-07-15 20:21:12.395766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:34.048 [2024-07-15 20:21:12.396756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:34.048 [2024-07-15 20:21:12.396775] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:34.048 [2024-07-15 20:21:12.396784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:34.048 [2024-07-15 20:21:12.396796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:34.048 [2024-07-15 20:21:12.396905] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:34.048 [2024-07-15 20:21:12.396914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:34.048 [2024-07-15 20:21:12.396922] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:34.048 [2024-07-15 20:21:12.397760] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:34.048 [2024-07-15 20:21:12.398761] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:34.048 [2024-07-15 20:21:12.399766] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:34.048 [2024-07-15 20:21:12.400761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.048 [2024-07-15 20:21:12.400829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:34.048 [2024-07-15 20:21:12.401783] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:34.048 [2024-07-15 20:21:12.401803] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:34.048 [2024-07-15 20:21:12.401812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.401835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:34.048 [2024-07-15 20:21:12.401851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.401873] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:34.048 [2024-07-15 20:21:12.401904] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.048 [2024-07-15 20:21:12.401923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.048 [2024-07-15 20:21:12.409903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:34.048 [2024-07-15 20:21:12.409925] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:34.048 [2024-07-15 20:21:12.409938] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:34.048 [2024-07-15 20:21:12.409947] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:34.048 [2024-07-15 20:21:12.409958] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:34.048 [2024-07-15 20:21:12.409966] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:34.048 [2024-07-15 20:21:12.409974] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:34.048 [2024-07-15 20:21:12.409982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.409996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.410012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:34.048 [2024-07-15 20:21:12.417901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:34.048 [2024-07-15 20:21:12.417929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.048 [2024-07-15 20:21:12.417943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.048 [2024-07-15 20:21:12.417956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.048 [2024-07-15 20:21:12.417967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.048 [2024-07-15 20:21:12.417976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.417992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.418007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:34.048 [2024-07-15 20:21:12.425903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:34.048 [2024-07-15 20:21:12.425921] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:34.048 [2024-07-15 20:21:12.425930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.425942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.425952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.425966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:34.048 [2024-07-15 20:21:12.433887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:34.048 [2024-07-15 20:21:12.433956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.433972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.433984] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:34.048 [2024-07-15 20:21:12.433993] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:34.048 [2024-07-15 20:21:12.434007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:34.048 [2024-07-15 20:21:12.441885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:34.048 [2024-07-15 20:21:12.441907] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:34.048 [2024-07-15 20:21:12.441922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.441937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.441950] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:34.048 [2024-07-15 20:21:12.441959] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.048 [2024-07-15 20:21:12.441969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.048 [2024-07-15 20:21:12.449885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:34.048 [2024-07-15 20:21:12.449913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.449929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.449942] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:34.048 [2024-07-15 20:21:12.449950] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.048 [2024-07-15 20:21:12.449960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.048 [2024-07-15 20:21:12.457901] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:34.048 [2024-07-15 20:21:12.457922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.457934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.457948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.457960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.457968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.457976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.457985] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:34.048 [2024-07-15 20:21:12.457993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:34.048 [2024-07-15 20:21:12.458002] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:34.049 [2024-07-15 20:21:12.458026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:34.049 [2024-07-15 20:21:12.465902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:34.049 [2024-07-15 20:21:12.465928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:34.049 [2024-07-15 20:21:12.473899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:34.049 [2024-07-15 20:21:12.473925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:34.049 [2024-07-15 20:21:12.479515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:34.049 [2024-07-15 20:21:12.479539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:34.049 [2024-07-15 20:21:12.488903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:34.049 [2024-07-15 20:21:12.488935] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:34.049 [2024-07-15 20:21:12.488947] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:34.049 [2024-07-15 20:21:12.488953] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:15:34.049 [2024-07-15 20:21:12.488960] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:34.049 [2024-07-15 20:21:12.488970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:34.049 [2024-07-15 20:21:12.488982] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:34.049 [2024-07-15 20:21:12.488990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:34.049 [2024-07-15 20:21:12.489000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:34.049 [2024-07-15 20:21:12.489011] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:34.049 [2024-07-15 20:21:12.489020] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:34.049 [2024-07-15 20:21:12.489029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:34.049 [2024-07-15 20:21:12.489042] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:34.049 [2024-07-15 20:21:12.489050] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:34.049 [2024-07-15 20:21:12.489059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:34.049 [2024-07-15 20:21:12.496900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:34.049 [2024-07-15 20:21:12.496928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:34.049 [2024-07-15 20:21:12.496945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:34.049 [2024-07-15 20:21:12.496957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:34.049 ===================================================== 00:15:34.049 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.049 ===================================================== 00:15:34.049 Controller Capabilities/Features 00:15:34.049 ================================ 00:15:34.049 Vendor ID: 4e58 00:15:34.049 Subsystem Vendor ID: 4e58 00:15:34.049 Serial Number: SPDK2 00:15:34.049 Model Number: SPDK bdev Controller 00:15:34.049 Firmware Version: 24.09 00:15:34.049 Recommended Arb Burst: 6 00:15:34.049 IEEE OUI Identifier: 8d 6b 50 00:15:34.049 Multi-path I/O 00:15:34.049 May have multiple subsystem ports: Yes 00:15:34.049 May have multiple controllers: Yes 00:15:34.049 Associated with SR-IOV VF: No 00:15:34.049 Max Data Transfer Size: 131072 00:15:34.049 Max Number of Namespaces: 32 00:15:34.049 Max Number of I/O Queues: 127 00:15:34.049 NVMe Specification Version (VS): 1.3 00:15:34.049 NVMe Specification Version (Identify): 1.3 00:15:34.049 Maximum Queue Entries: 256 00:15:34.049 Contiguous Queues Required: Yes 00:15:34.049 Arbitration Mechanisms 
Supported 00:15:34.049 Weighted Round Robin: Not Supported 00:15:34.049 Vendor Specific: Not Supported 00:15:34.049 Reset Timeout: 15000 ms 00:15:34.049 Doorbell Stride: 4 bytes 00:15:34.049 NVM Subsystem Reset: Not Supported 00:15:34.049 Command Sets Supported 00:15:34.049 NVM Command Set: Supported 00:15:34.049 Boot Partition: Not Supported 00:15:34.049 Memory Page Size Minimum: 4096 bytes 00:15:34.049 Memory Page Size Maximum: 4096 bytes 00:15:34.049 Persistent Memory Region: Not Supported 00:15:34.049 Optional Asynchronous Events Supported 00:15:34.049 Namespace Attribute Notices: Supported 00:15:34.049 Firmware Activation Notices: Not Supported 00:15:34.049 ANA Change Notices: Not Supported 00:15:34.049 PLE Aggregate Log Change Notices: Not Supported 00:15:34.049 LBA Status Info Alert Notices: Not Supported 00:15:34.049 EGE Aggregate Log Change Notices: Not Supported 00:15:34.049 Normal NVM Subsystem Shutdown event: Not Supported 00:15:34.049 Zone Descriptor Change Notices: Not Supported 00:15:34.049 Discovery Log Change Notices: Not Supported 00:15:34.049 Controller Attributes 00:15:34.049 128-bit Host Identifier: Supported 00:15:34.049 Non-Operational Permissive Mode: Not Supported 00:15:34.049 NVM Sets: Not Supported 00:15:34.049 Read Recovery Levels: Not Supported 00:15:34.049 Endurance Groups: Not Supported 00:15:34.049 Predictable Latency Mode: Not Supported 00:15:34.049 Traffic Based Keep ALive: Not Supported 00:15:34.049 Namespace Granularity: Not Supported 00:15:34.049 SQ Associations: Not Supported 00:15:34.049 UUID List: Not Supported 00:15:34.049 Multi-Domain Subsystem: Not Supported 00:15:34.049 Fixed Capacity Management: Not Supported 00:15:34.049 Variable Capacity Management: Not Supported 00:15:34.049 Delete Endurance Group: Not Supported 00:15:34.049 Delete NVM Set: Not Supported 00:15:34.049 Extended LBA Formats Supported: Not Supported 00:15:34.049 Flexible Data Placement Supported: Not Supported 00:15:34.049 00:15:34.049 Controller Memory Buffer Support 00:15:34.049 ================================ 00:15:34.049 Supported: No 00:15:34.049 00:15:34.049 Persistent Memory Region Support 00:15:34.049 ================================ 00:15:34.049 Supported: No 00:15:34.049 00:15:34.049 Admin Command Set Attributes 00:15:34.049 ============================ 00:15:34.049 Security Send/Receive: Not Supported 00:15:34.049 Format NVM: Not Supported 00:15:34.049 Firmware Activate/Download: Not Supported 00:15:34.049 Namespace Management: Not Supported 00:15:34.049 Device Self-Test: Not Supported 00:15:34.049 Directives: Not Supported 00:15:34.049 NVMe-MI: Not Supported 00:15:34.049 Virtualization Management: Not Supported 00:15:34.049 Doorbell Buffer Config: Not Supported 00:15:34.049 Get LBA Status Capability: Not Supported 00:15:34.049 Command & Feature Lockdown Capability: Not Supported 00:15:34.049 Abort Command Limit: 4 00:15:34.049 Async Event Request Limit: 4 00:15:34.049 Number of Firmware Slots: N/A 00:15:34.049 Firmware Slot 1 Read-Only: N/A 00:15:34.049 Firmware Activation Without Reset: N/A 00:15:34.049 Multiple Update Detection Support: N/A 00:15:34.049 Firmware Update Granularity: No Information Provided 00:15:34.049 Per-Namespace SMART Log: No 00:15:34.049 Asymmetric Namespace Access Log Page: Not Supported 00:15:34.049 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:34.049 Command Effects Log Page: Supported 00:15:34.049 Get Log Page Extended Data: Supported 00:15:34.049 Telemetry Log Pages: Not Supported 00:15:34.050 Persistent Event Log Pages: Not Supported 
00:15:34.050 Supported Log Pages Log Page: May Support 00:15:34.050 Commands Supported & Effects Log Page: Not Supported 00:15:34.050 Feature Identifiers & Effects Log Page:May Support 00:15:34.050 NVMe-MI Commands & Effects Log Page: May Support 00:15:34.050 Data Area 4 for Telemetry Log: Not Supported 00:15:34.050 Error Log Page Entries Supported: 128 00:15:34.050 Keep Alive: Supported 00:15:34.050 Keep Alive Granularity: 10000 ms 00:15:34.050 00:15:34.050 NVM Command Set Attributes 00:15:34.050 ========================== 00:15:34.050 Submission Queue Entry Size 00:15:34.050 Max: 64 00:15:34.050 Min: 64 00:15:34.050 Completion Queue Entry Size 00:15:34.050 Max: 16 00:15:34.050 Min: 16 00:15:34.050 Number of Namespaces: 32 00:15:34.050 Compare Command: Supported 00:15:34.050 Write Uncorrectable Command: Not Supported 00:15:34.050 Dataset Management Command: Supported 00:15:34.050 Write Zeroes Command: Supported 00:15:34.050 Set Features Save Field: Not Supported 00:15:34.050 Reservations: Not Supported 00:15:34.050 Timestamp: Not Supported 00:15:34.050 Copy: Supported 00:15:34.050 Volatile Write Cache: Present 00:15:34.050 Atomic Write Unit (Normal): 1 00:15:34.050 Atomic Write Unit (PFail): 1 00:15:34.050 Atomic Compare & Write Unit: 1 00:15:34.050 Fused Compare & Write: Supported 00:15:34.050 Scatter-Gather List 00:15:34.050 SGL Command Set: Supported (Dword aligned) 00:15:34.050 SGL Keyed: Not Supported 00:15:34.050 SGL Bit Bucket Descriptor: Not Supported 00:15:34.050 SGL Metadata Pointer: Not Supported 00:15:34.050 Oversized SGL: Not Supported 00:15:34.050 SGL Metadata Address: Not Supported 00:15:34.050 SGL Offset: Not Supported 00:15:34.050 Transport SGL Data Block: Not Supported 00:15:34.050 Replay Protected Memory Block: Not Supported 00:15:34.050 00:15:34.050 Firmware Slot Information 00:15:34.050 ========================= 00:15:34.050 Active slot: 1 00:15:34.050 Slot 1 Firmware Revision: 24.09 00:15:34.050 00:15:34.050 00:15:34.050 Commands Supported and Effects 00:15:34.050 ============================== 00:15:34.050 Admin Commands 00:15:34.050 -------------- 00:15:34.050 Get Log Page (02h): Supported 00:15:34.050 Identify (06h): Supported 00:15:34.050 Abort (08h): Supported 00:15:34.050 Set Features (09h): Supported 00:15:34.050 Get Features (0Ah): Supported 00:15:34.050 Asynchronous Event Request (0Ch): Supported 00:15:34.050 Keep Alive (18h): Supported 00:15:34.050 I/O Commands 00:15:34.050 ------------ 00:15:34.050 Flush (00h): Supported LBA-Change 00:15:34.050 Write (01h): Supported LBA-Change 00:15:34.050 Read (02h): Supported 00:15:34.050 Compare (05h): Supported 00:15:34.050 Write Zeroes (08h): Supported LBA-Change 00:15:34.050 Dataset Management (09h): Supported LBA-Change 00:15:34.050 Copy (19h): Supported LBA-Change 00:15:34.050 00:15:34.050 Error Log 00:15:34.050 ========= 00:15:34.050 00:15:34.050 Arbitration 00:15:34.050 =========== 00:15:34.050 Arbitration Burst: 1 00:15:34.050 00:15:34.050 Power Management 00:15:34.050 ================ 00:15:34.050 Number of Power States: 1 00:15:34.050 Current Power State: Power State #0 00:15:34.050 Power State #0: 00:15:34.050 Max Power: 0.00 W 00:15:34.050 Non-Operational State: Operational 00:15:34.050 Entry Latency: Not Reported 00:15:34.050 Exit Latency: Not Reported 00:15:34.050 Relative Read Throughput: 0 00:15:34.050 Relative Read Latency: 0 00:15:34.050 Relative Write Throughput: 0 00:15:34.050 Relative Write Latency: 0 00:15:34.050 Idle Power: Not Reported 00:15:34.050 Active Power: Not Reported 00:15:34.050 
Non-Operational Permissive Mode: Not Supported 00:15:34.050 00:15:34.050 Health Information 00:15:34.050 ================== 00:15:34.050 Critical Warnings: 00:15:34.050 Available Spare Space: OK 00:15:34.050 Temperature: OK 00:15:34.050 Device Reliability: OK 00:15:34.050 Read Only: No 00:15:34.050 Volatile Memory Backup: OK 00:15:34.050 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:34.050 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:34.050 Available Spare: 0% 00:15:34.050 [2024-07-15 20:21:12.497071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:34.050 [2024-07-15 20:21:12.504891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:34.050 [2024-07-15 20:21:12.504940] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:34.050 [2024-07-15 20:21:12.504963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.050 [2024-07-15 20:21:12.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.050 [2024-07-15 20:21:12.504985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.050 [2024-07-15 20:21:12.504995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.050 [2024-07-15 20:21:12.505060] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:34.050 [2024-07-15 20:21:12.505081] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:34.050 [2024-07-15 20:21:12.506069] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.050 [2024-07-15 20:21:12.506142] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:34.050 [2024-07-15 20:21:12.506172] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:34.050 [2024-07-15 20:21:12.507074] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:34.050 [2024-07-15 20:21:12.507098] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:34.050 [2024-07-15 20:21:12.507148] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:34.050 [2024-07-15 20:21:12.508331] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:34.050 Available Spare Threshold: 0% 00:15:34.050 Life Percentage Used: 0% 00:15:34.050 Data Units Read: 0 00:15:34.050 Data Units Written: 0 00:15:34.050 Host Read Commands: 0 00:15:34.050 Host Write Commands: 0 00:15:34.050 Controller Busy Time: 0 minutes 00:15:34.050 Power Cycles: 0 00:15:34.050 Power On Hours: 0 hours 00:15:34.050 Unsafe Shutdowns: 0 00:15:34.050 Unrecoverable Media 
Errors: 0 00:15:34.050 Lifetime Error Log Entries: 0 00:15:34.050 Warning Temperature Time: 0 minutes 00:15:34.050 Critical Temperature Time: 0 minutes 00:15:34.050 00:15:34.050 Number of Queues 00:15:34.050 ================ 00:15:34.050 Number of I/O Submission Queues: 127 00:15:34.050 Number of I/O Completion Queues: 127 00:15:34.050 00:15:34.050 Active Namespaces 00:15:34.050 ================= 00:15:34.050 Namespace ID:1 00:15:34.050 Error Recovery Timeout: Unlimited 00:15:34.050 Command Set Identifier: NVM (00h) 00:15:34.050 Deallocate: Supported 00:15:34.050 Deallocated/Unwritten Error: Not Supported 00:15:34.050 Deallocated Read Value: Unknown 00:15:34.050 Deallocate in Write Zeroes: Not Supported 00:15:34.050 Deallocated Guard Field: 0xFFFF 00:15:34.050 Flush: Supported 00:15:34.050 Reservation: Supported 00:15:34.050 Namespace Sharing Capabilities: Multiple Controllers 00:15:34.050 Size (in LBAs): 131072 (0GiB) 00:15:34.050 Capacity (in LBAs): 131072 (0GiB) 00:15:34.050 Utilization (in LBAs): 131072 (0GiB) 00:15:34.050 NGUID: 7F41D0989EE04CF58EDD796C2383C18D 00:15:34.050 UUID: 7f41d098-9ee0-4cf5-8edd-796c2383c18d 00:15:34.050 Thin Provisioning: Not Supported 00:15:34.050 Per-NS Atomic Units: Yes 00:15:34.050 Atomic Boundary Size (Normal): 0 00:15:34.050 Atomic Boundary Size (PFail): 0 00:15:34.050 Atomic Boundary Offset: 0 00:15:34.050 Maximum Single Source Range Length: 65535 00:15:34.050 Maximum Copy Length: 65535 00:15:34.050 Maximum Source Range Count: 1 00:15:34.050 NGUID/EUI64 Never Reused: No 00:15:34.050 Namespace Write Protected: No 00:15:34.050 Number of LBA Formats: 1 00:15:34.050 Current LBA Format: LBA Format #00 00:15:34.050 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:34.050 00:15:34.050 20:21:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:34.310 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.310 [2024-07-15 20:21:12.740671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.582 Initializing NVMe Controllers 00:15:39.582 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:39.582 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:39.582 Initialization complete. Launching workers. 
00:15:39.583 ======================================================== 00:15:39.583 Latency(us) 00:15:39.583 Device Information : IOPS MiB/s Average min max 00:15:39.583 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34196.65 133.58 3742.23 1185.51 8627.58 00:15:39.583 ======================================================== 00:15:39.583 Total : 34196.65 133.58 3742.23 1185.51 8627.58 00:15:39.583 00:15:39.583 [2024-07-15 20:21:17.853225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.583 20:21:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:39.583 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.583 [2024-07-15 20:21:18.084874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.854 Initializing NVMe Controllers 00:15:44.854 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.854 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:44.854 Initialization complete. Launching workers. 00:15:44.854 ======================================================== 00:15:44.854 Latency(us) 00:15:44.854 Device Information : IOPS MiB/s Average min max 00:15:44.854 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31653.20 123.65 4043.40 1203.96 9259.12 00:15:44.854 ======================================================== 00:15:44.854 Total : 31653.20 123.65 4043.40 1203.96 9259.12 00:15:44.854 00:15:44.854 [2024-07-15 20:21:23.106015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.854 20:21:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:44.854 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.854 [2024-07-15 20:21:23.316883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.121 [2024-07-15 20:21:28.465024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.121 Initializing NVMe Controllers 00:15:50.121 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.121 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.121 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:50.121 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:50.121 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:50.121 Initialization complete. Launching workers. 
00:15:50.121 Starting thread on core 2 00:15:50.121 Starting thread on core 3 00:15:50.121 Starting thread on core 1 00:15:50.121 20:21:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:50.121 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.377 [2024-07-15 20:21:28.774484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.570 [2024-07-15 20:21:32.206725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.570 Initializing NVMe Controllers 00:15:54.570 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.570 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.570 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:54.570 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:54.570 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:54.570 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:54.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:54.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:54.570 Initialization complete. Launching workers. 00:15:54.570 Starting thread on core 1 with urgent priority queue 00:15:54.570 Starting thread on core 2 with urgent priority queue 00:15:54.570 Starting thread on core 3 with urgent priority queue 00:15:54.570 Starting thread on core 0 with urgent priority queue 00:15:54.570 SPDK bdev Controller (SPDK2 ) core 0: 3338.67 IO/s 29.95 secs/100000 ios 00:15:54.570 SPDK bdev Controller (SPDK2 ) core 1: 4640.00 IO/s 21.55 secs/100000 ios 00:15:54.570 SPDK bdev Controller (SPDK2 ) core 2: 4673.00 IO/s 21.40 secs/100000 ios 00:15:54.570 SPDK bdev Controller (SPDK2 ) core 3: 4846.67 IO/s 20.63 secs/100000 ios 00:15:54.570 ======================================================== 00:15:54.570 00:15:54.570 20:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:54.570 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.570 [2024-07-15 20:21:32.502949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.570 Initializing NVMe Controllers 00:15:54.570 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.570 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.570 Namespace ID: 1 size: 0GB 00:15:54.570 Initialization complete. 00:15:54.570 INFO: using host memory buffer for IO 00:15:54.570 Hello world! 
00:15:54.570 [2024-07-15 20:21:32.517150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.570 20:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:54.570 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.570 [2024-07-15 20:21:32.798751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.506 Initializing NVMe Controllers 00:15:55.506 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.506 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.506 Initialization complete. Launching workers. 00:15:55.506 submit (in ns) avg, min, max = 7826.8, 3518.9, 4016676.7 00:15:55.506 complete (in ns) avg, min, max = 26195.5, 2056.7, 7989852.2 00:15:55.506 00:15:55.506 Submit histogram 00:15:55.506 ================ 00:15:55.506 Range in us Cumulative Count 00:15:55.506 3.508 - 3.532: 0.2326% ( 31) 00:15:55.506 3.532 - 3.556: 0.8853% ( 87) 00:15:55.506 3.556 - 3.579: 2.8284% ( 259) 00:15:55.506 3.579 - 3.603: 6.8647% ( 538) 00:15:55.506 3.603 - 3.627: 13.9995% ( 951) 00:15:55.506 3.627 - 3.650: 23.5126% ( 1268) 00:15:55.506 3.650 - 3.674: 34.3387% ( 1443) 00:15:55.506 3.674 - 3.698: 42.2462% ( 1054) 00:15:55.506 3.698 - 3.721: 49.7112% ( 995) 00:15:55.506 3.721 - 3.745: 54.4827% ( 636) 00:15:55.506 3.745 - 3.769: 58.9617% ( 597) 00:15:55.506 3.769 - 3.793: 62.6003% ( 485) 00:15:55.506 3.793 - 3.816: 65.9089% ( 441) 00:15:55.506 3.816 - 3.840: 69.2175% ( 441) 00:15:55.506 3.840 - 3.864: 72.8262% ( 481) 00:15:55.506 3.864 - 3.887: 76.8625% ( 538) 00:15:55.506 3.887 - 3.911: 81.0713% ( 561) 00:15:55.506 3.911 - 3.935: 84.4775% ( 454) 00:15:55.506 3.935 - 3.959: 87.0208% ( 339) 00:15:55.506 3.959 - 3.982: 89.0689% ( 273) 00:15:55.506 3.982 - 4.006: 90.6295% ( 208) 00:15:55.506 4.006 - 4.030: 91.8148% ( 158) 00:15:55.506 4.030 - 4.053: 92.8277% ( 135) 00:15:55.506 4.053 - 4.077: 93.7880% ( 128) 00:15:55.506 4.077 - 4.101: 94.5757% ( 105) 00:15:55.506 4.101 - 4.124: 95.1984% ( 83) 00:15:55.506 4.124 - 4.148: 95.8136% ( 82) 00:15:55.506 4.148 - 4.172: 96.2263% ( 55) 00:15:55.506 4.172 - 4.196: 96.4814% ( 34) 00:15:55.506 4.196 - 4.219: 96.7064% ( 30) 00:15:55.506 4.219 - 4.243: 96.8640% ( 21) 00:15:55.506 4.243 - 4.267: 97.0065% ( 19) 00:15:55.506 4.267 - 4.290: 97.1191% ( 15) 00:15:55.506 4.290 - 4.314: 97.1866% ( 9) 00:15:55.506 4.314 - 4.338: 97.2841% ( 13) 00:15:55.506 4.338 - 4.361: 97.3816% ( 13) 00:15:55.506 4.361 - 4.385: 97.4492% ( 9) 00:15:55.506 4.385 - 4.409: 97.4867% ( 5) 00:15:55.506 4.409 - 4.433: 97.5392% ( 7) 00:15:55.506 4.433 - 4.456: 97.5692% ( 4) 00:15:55.506 4.456 - 4.480: 97.5992% ( 4) 00:15:55.506 4.480 - 4.504: 97.6142% ( 2) 00:15:55.506 4.504 - 4.527: 97.6217% ( 1) 00:15:55.506 4.527 - 4.551: 97.6292% ( 1) 00:15:55.506 4.551 - 4.575: 97.6367% ( 1) 00:15:55.506 4.575 - 4.599: 97.6592% ( 3) 00:15:55.506 4.599 - 4.622: 97.6742% ( 2) 00:15:55.506 4.622 - 4.646: 97.6817% ( 1) 00:15:55.506 4.646 - 4.670: 97.6892% ( 1) 00:15:55.506 4.670 - 4.693: 97.6968% ( 1) 00:15:55.506 4.693 - 4.717: 97.7418% ( 6) 00:15:55.506 4.717 - 4.741: 97.7643% ( 3) 00:15:55.506 4.741 - 4.764: 97.7943% ( 4) 00:15:55.506 4.764 - 4.788: 97.8468% ( 7) 00:15:55.506 4.788 - 4.812: 97.8768% ( 4) 00:15:55.506 4.812 - 4.836: 97.8993% ( 3) 00:15:55.506 4.836 - 
4.859: 97.9293% ( 4) 00:15:55.506 4.859 - 4.883: 97.9668% ( 5) 00:15:55.506 4.883 - 4.907: 98.0119% ( 6) 00:15:55.506 4.907 - 4.930: 98.0494% ( 5) 00:15:55.506 4.930 - 4.954: 98.1019% ( 7) 00:15:55.506 4.954 - 4.978: 98.1469% ( 6) 00:15:55.506 4.978 - 5.001: 98.1694% ( 3) 00:15:55.506 5.001 - 5.025: 98.2069% ( 5) 00:15:55.506 5.025 - 5.049: 98.2444% ( 5) 00:15:55.506 5.049 - 5.073: 98.2669% ( 3) 00:15:55.506 5.073 - 5.096: 98.2819% ( 2) 00:15:55.506 5.096 - 5.120: 98.2894% ( 1) 00:15:55.506 5.120 - 5.144: 98.3044% ( 2) 00:15:55.506 5.144 - 5.167: 98.3345% ( 4) 00:15:55.506 5.167 - 5.191: 98.3420% ( 1) 00:15:55.506 5.191 - 5.215: 98.3645% ( 3) 00:15:55.506 5.215 - 5.239: 98.3945% ( 4) 00:15:55.506 5.239 - 5.262: 98.4095% ( 2) 00:15:55.506 5.262 - 5.286: 98.4170% ( 1) 00:15:55.506 5.286 - 5.310: 98.4245% ( 1) 00:15:55.506 5.333 - 5.357: 98.4320% ( 1) 00:15:55.506 5.404 - 5.428: 98.4470% ( 2) 00:15:55.506 5.428 - 5.452: 98.4545% ( 1) 00:15:55.506 5.499 - 5.523: 98.4695% ( 2) 00:15:55.506 5.547 - 5.570: 98.4770% ( 1) 00:15:55.506 5.570 - 5.594: 98.4845% ( 1) 00:15:55.506 5.713 - 5.736: 98.4920% ( 1) 00:15:55.506 5.926 - 5.950: 98.4995% ( 1) 00:15:55.506 6.116 - 6.163: 98.5070% ( 1) 00:15:55.506 6.258 - 6.305: 98.5145% ( 1) 00:15:55.506 6.305 - 6.353: 98.5220% ( 1) 00:15:55.506 6.400 - 6.447: 98.5295% ( 1) 00:15:55.506 6.590 - 6.637: 98.5370% ( 1) 00:15:55.506 6.732 - 6.779: 98.5520% ( 2) 00:15:55.506 6.874 - 6.921: 98.5595% ( 1) 00:15:55.506 6.969 - 7.016: 98.5670% ( 1) 00:15:55.506 7.016 - 7.064: 98.5820% ( 2) 00:15:55.506 7.064 - 7.111: 98.5970% ( 2) 00:15:55.506 7.159 - 7.206: 98.6045% ( 1) 00:15:55.506 7.206 - 7.253: 98.6120% ( 1) 00:15:55.506 7.253 - 7.301: 98.6196% ( 1) 00:15:55.506 7.301 - 7.348: 98.6271% ( 1) 00:15:55.506 7.396 - 7.443: 98.6421% ( 2) 00:15:55.506 7.443 - 7.490: 98.6496% ( 1) 00:15:55.506 7.490 - 7.538: 98.6646% ( 2) 00:15:55.506 7.538 - 7.585: 98.6796% ( 2) 00:15:55.506 7.585 - 7.633: 98.6946% ( 2) 00:15:55.506 7.775 - 7.822: 98.7096% ( 2) 00:15:55.506 7.822 - 7.870: 98.7246% ( 2) 00:15:55.506 7.870 - 7.917: 98.7396% ( 2) 00:15:55.506 7.917 - 7.964: 98.7471% ( 1) 00:15:55.506 7.964 - 8.012: 98.7546% ( 1) 00:15:55.506 8.012 - 8.059: 98.7621% ( 1) 00:15:55.506 8.059 - 8.107: 98.7696% ( 1) 00:15:55.506 8.107 - 8.154: 98.7771% ( 1) 00:15:55.506 8.154 - 8.201: 98.7921% ( 2) 00:15:55.506 8.249 - 8.296: 98.8071% ( 2) 00:15:55.506 8.344 - 8.391: 98.8146% ( 1) 00:15:55.506 8.439 - 8.486: 98.8296% ( 2) 00:15:55.506 8.581 - 8.628: 98.8371% ( 1) 00:15:55.506 8.676 - 8.723: 98.8521% ( 2) 00:15:55.506 8.723 - 8.770: 98.8596% ( 1) 00:15:55.506 8.770 - 8.818: 98.8746% ( 2) 00:15:55.506 8.818 - 8.865: 98.8821% ( 1) 00:15:55.506 8.865 - 8.913: 98.8896% ( 1) 00:15:55.506 8.913 - 8.960: 98.8971% ( 1) 00:15:55.506 8.960 - 9.007: 98.9196% ( 3) 00:15:55.506 9.387 - 9.434: 98.9272% ( 1) 00:15:55.506 9.481 - 9.529: 98.9347% ( 1) 00:15:55.506 9.576 - 9.624: 98.9422% ( 1) 00:15:55.506 9.671 - 9.719: 98.9497% ( 1) 00:15:55.506 10.050 - 10.098: 98.9572% ( 1) 00:15:55.506 10.193 - 10.240: 98.9647% ( 1) 00:15:55.506 10.382 - 10.430: 98.9797% ( 2) 00:15:55.506 10.619 - 10.667: 98.9872% ( 1) 00:15:55.506 11.330 - 11.378: 99.0022% ( 2) 00:15:55.506 11.378 - 11.425: 99.0097% ( 1) 00:15:55.506 11.899 - 11.947: 99.0172% ( 1) 00:15:55.506 12.089 - 12.136: 99.0247% ( 1) 00:15:55.506 12.136 - 12.231: 99.0322% ( 1) 00:15:55.506 12.231 - 12.326: 99.0397% ( 1) 00:15:55.506 12.326 - 12.421: 99.0472% ( 1) 00:15:55.506 12.421 - 12.516: 99.0547% ( 1) 00:15:55.506 12.895 - 12.990: 99.0622% ( 1) 00:15:55.506 13.084 - 
13.179: 99.0697% ( 1) 00:15:55.506 13.274 - 13.369: 99.0772% ( 1) 00:15:55.507 13.843 - 13.938: 99.0847% ( 1) 00:15:55.507 14.033 - 14.127: 99.0997% ( 2) 00:15:55.507 14.222 - 14.317: 99.1072% ( 1) 00:15:55.507 14.601 - 14.696: 99.1222% ( 2) 00:15:55.507 15.076 - 15.170: 99.1297% ( 1) 00:15:55.507 17.161 - 17.256: 99.1447% ( 2) 00:15:55.507 17.256 - 17.351: 99.1522% ( 1) 00:15:55.507 17.351 - 17.446: 99.1672% ( 2) 00:15:55.507 17.446 - 17.541: 99.1897% ( 3) 00:15:55.507 17.541 - 17.636: 99.2197% ( 4) 00:15:55.507 17.636 - 17.730: 99.2798% ( 8) 00:15:55.507 17.730 - 17.825: 99.3473% ( 9) 00:15:55.507 17.825 - 17.920: 99.3548% ( 1) 00:15:55.507 17.920 - 18.015: 99.3923% ( 5) 00:15:55.507 18.015 - 18.110: 99.4298% ( 5) 00:15:55.507 18.110 - 18.204: 99.4673% ( 5) 00:15:55.507 18.204 - 18.299: 99.5499% ( 11) 00:15:55.507 18.299 - 18.394: 99.5649% ( 2) 00:15:55.507 18.394 - 18.489: 99.6099% ( 6) 00:15:55.507 18.489 - 18.584: 99.6699% ( 8) 00:15:55.507 18.584 - 18.679: 99.6849% ( 2) 00:15:55.507 18.679 - 18.773: 99.7299% ( 6) 00:15:55.507 18.773 - 18.868: 99.7749% ( 6) 00:15:55.507 18.868 - 18.963: 99.8199% ( 6) 00:15:55.507 19.058 - 19.153: 99.8274% ( 1) 00:15:55.507 19.247 - 19.342: 99.8500% ( 3) 00:15:55.507 19.342 - 19.437: 99.8575% ( 1) 00:15:55.507 19.911 - 20.006: 99.8650% ( 1) 00:15:55.507 20.006 - 20.101: 99.8725% ( 1) 00:15:55.507 20.290 - 20.385: 99.8800% ( 1) 00:15:55.507 21.902 - 21.997: 99.8875% ( 1) 00:15:55.507 23.135 - 23.230: 99.8950% ( 1) 00:15:55.507 29.013 - 29.203: 99.9025% ( 1) 00:15:55.507 3980.705 - 4004.978: 99.9700% ( 9) 00:15:55.507 4004.978 - 4029.250: 100.0000% ( 4) 00:15:55.507 00:15:55.507 Complete histogram 00:15:55.507 ================== 00:15:55.507 Range in us Cumulative Count 00:15:55.507 2.050 - 2.062: 0.6902% ( 92) 00:15:55.507 2.062 - 2.074: 37.0095% ( 4841) 00:15:55.507 2.074 - 2.086: 48.0231% ( 1468) 00:15:55.507 2.086 - 2.098: 50.6865% ( 355) 00:15:55.507 2.098 - 2.110: 59.5544% ( 1182) 00:15:55.507 2.110 - 2.121: 62.2702% ( 362) 00:15:55.507 2.121 - 2.133: 65.6763% ( 454) 00:15:55.507 2.133 - 2.145: 75.4145% ( 1298) 00:15:55.507 2.145 - 2.157: 77.4177% ( 267) 00:15:55.507 2.157 - 2.169: 79.1882% ( 236) 00:15:55.507 2.169 - 2.181: 82.3468% ( 421) 00:15:55.507 2.181 - 2.193: 83.3746% ( 137) 00:15:55.507 2.193 - 2.204: 84.5900% ( 162) 00:15:55.507 2.204 - 2.216: 88.5738% ( 531) 00:15:55.507 2.216 - 2.228: 91.0571% ( 331) 00:15:55.507 2.228 - 2.240: 92.3025% ( 166) 00:15:55.507 2.240 - 2.252: 93.5404% ( 165) 00:15:55.507 2.252 - 2.264: 94.0281% ( 65) 00:15:55.507 2.264 - 2.276: 94.2231% ( 26) 00:15:55.507 2.276 - 2.287: 94.5682% ( 46) 00:15:55.507 2.287 - 2.299: 95.2810% ( 95) 00:15:55.507 2.299 - 2.311: 95.6036% ( 43) 00:15:55.507 2.311 - 2.323: 95.7011% ( 13) 00:15:55.507 2.323 - 2.335: 95.7461% ( 6) 00:15:55.507 2.335 - 2.347: 95.8512% ( 14) 00:15:55.507 2.347 - 2.359: 96.0462% ( 26) 00:15:55.507 2.359 - 2.370: 96.4964% ( 60) 00:15:55.507 2.370 - 2.382: 96.9540% ( 61) 00:15:55.507 2.382 - 2.394: 97.2691% ( 42) 00:15:55.507 2.394 - 2.406: 97.5767% ( 41) 00:15:55.507 2.406 - 2.418: 97.7493% ( 23) 00:15:55.507 2.418 - 2.430: 97.9368% ( 25) 00:15:55.507 2.430 - 2.441: 98.0569% ( 16) 00:15:55.507 2.441 - 2.453: 98.1694% ( 15) 00:15:55.507 2.453 - 2.465: 98.2819% ( 15) 00:15:55.507 2.465 - 2.477: 98.3870% ( 14) 00:15:55.507 2.477 - 2.489: 98.4545% ( 9) 00:15:55.507 2.489 - 2.501: 98.4995% ( 6) 00:15:55.507 2.501 - 2.513: 98.5370% ( 5) 00:15:55.507 2.513 - 2.524: 98.5670% ( 4) 00:15:55.507 2.524 - 2.536: 98.5970% ( 4) 00:15:55.507 2.536 - 2.548: 98.6271% ( 4) 
00:15:55.507 2.548 - 2.560: 98.6346% ( 1) 00:15:55.507 2.584 - 2.596: 98.6421% ( 1) 00:15:55.507 2.619 - 2.631: 98.6496% ( 1) 00:15:55.507 2.631 - 2.643: 98.6571% ( 1) 00:15:55.507 2.643 - 2.655: 98.6646% ( 1) 00:15:55.507 2.726 - 2.738: 98.6721% ( 1) 00:15:55.507 3.271 - 3.295: 98.6796% ( 1) 00:15:55.507 3.366 - 3.390: 98.6946% ( 2) 00:15:55.507 3.390 - 3.413: 98.7021% ( 1) 00:15:55.507 3.413 - 3.437: 98.7171% ( 2) 00:15:55.507 3.437 - 3.461: 98.7321% ( 2) 00:15:55.507 3.484 - 3.508: 98.7471% ( 2) 00:15:55.507 3.508 - 3.532: 98.7621% ( 2) 00:15:55.507 3.532 - 3.556: 98.7846% ( 3) 00:15:55.507 3.556 - 3.579: 98.8071% ( 3) 00:15:55.507 3.579 - 3.603: 98.8221% ( 2) 00:15:55.507 3.603 - 3.627: 98.8296% ( 1) 00:15:55.507 3.650 - 3.674: 98.8371% ( 1) 00:15:55.507 3.698 - 3.721: 98.8596% ( 3) 00:15:55.507 3.745 - 3.769: 98.8671% ( 1) 00:15:55.507 3.816 - 3.840: 98.8746% ( 1) 00:15:55.507 4.954 - 4.978: 98.8821% ( 1) 00:15:55.507 4.978 - 5.001: 98.8896% ( 1) 00:15:55.507 5.191 - 5.215: 98.8971% ( 1) 00:15:55.507 5.452 - 5.476: 98.9046% ( 1) 00:15:55.507 5.689 - 5.713: 98.9121% ( 1) 00:15:55.507 5.713 - 5.736: 98.9196% ( 1) 00:15:55.507 5.831 - 5.855: 98.9272% ( 1) 00:15:55.507 6.305 - 6.353: 98.9347% ( 1) 00:15:55.507 6.353 - 6.400: 98.9422% ( 1) 00:15:55.507 6.447 - 6.495: 98.9497% ( 1) 00:15:55.507 6.495 - 6.542: 98.9647% ( 2) 00:15:55.507 6.590 - 6.637: 98.9722% ( 1) 00:15:55.507 [2024-07-15 20:21:33.890642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.507 7.064 - 7.111: 98.9797% ( 1) 00:15:55.507 7.633 - 7.680: 98.9872% ( 1) 00:15:55.507 15.550 - 15.644: 98.9947% ( 1) 00:15:55.507 15.644 - 15.739: 99.0097% ( 2) 00:15:55.507 15.929 - 16.024: 99.0472% ( 5) 00:15:55.507 16.024 - 16.119: 99.0697% ( 3) 00:15:55.507 16.119 - 16.213: 99.0922% ( 3) 00:15:55.507 16.213 - 16.308: 99.1072% ( 2) 00:15:55.507 16.308 - 16.403: 99.1147% ( 1) 00:15:55.507 16.403 - 16.498: 99.1522% ( 5) 00:15:55.507 16.498 - 16.593: 99.1897% ( 5) 00:15:55.507 16.593 - 16.687: 99.2047% ( 2) 00:15:55.507 16.687 - 16.782: 99.2423% ( 5) 00:15:55.507 16.782 - 16.877: 99.2723% ( 4) 00:15:55.507 16.877 - 16.972: 99.3023% ( 4) 00:15:55.507 16.972 - 17.067: 99.3248% ( 3) 00:15:55.507 17.067 - 17.161: 99.3398% ( 2) 00:15:55.507 17.161 - 17.256: 99.3623% ( 3) 00:15:55.507 17.256 - 17.351: 99.3698% ( 1) 00:15:55.507 17.446 - 17.541: 99.3848% ( 2) 00:15:55.507 17.730 - 17.825: 99.3923% ( 1) 00:15:55.507 18.110 - 18.204: 99.3998% ( 1) 00:15:55.507 25.410 - 25.600: 99.4073% ( 1) 00:15:55.507 32.616 - 32.806: 99.4148% ( 1) 00:15:55.507 3859.342 - 3883.615: 99.4223% ( 1) 00:15:55.507 3980.705 - 4004.978: 99.8424% ( 56) 00:15:55.507 4004.978 - 4029.250: 99.9850% ( 19) 00:15:55.507 7961.410 - 8009.956: 100.0000% ( 2) 00:15:55.507 00:15:55.507 20:21:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:55.507 20:21:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:55.507 20:21:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:55.507 20:21:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:55.507 20:21:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:55.764 [ 00:15:55.764 { 00:15:55.764 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:15:55.764 "subtype": "Discovery", 00:15:55.764 "listen_addresses": [], 00:15:55.764 "allow_any_host": true, 00:15:55.764 "hosts": [] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:55.764 "subtype": "NVMe", 00:15:55.764 "listen_addresses": [ 00:15:55.764 { 00:15:55.764 "trtype": "VFIOUSER", 00:15:55.764 "adrfam": "IPv4", 00:15:55.764 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:55.764 "trsvcid": "0" 00:15:55.764 } 00:15:55.764 ], 00:15:55.764 "allow_any_host": true, 00:15:55.764 "hosts": [], 00:15:55.764 "serial_number": "SPDK1", 00:15:55.764 "model_number": "SPDK bdev Controller", 00:15:55.764 "max_namespaces": 32, 00:15:55.764 "min_cntlid": 1, 00:15:55.764 "max_cntlid": 65519, 00:15:55.764 "namespaces": [ 00:15:55.764 { 00:15:55.764 "nsid": 1, 00:15:55.764 "bdev_name": "Malloc1", 00:15:55.764 "name": "Malloc1", 00:15:55.764 "nguid": "9D1FE9D97B1B475BB7BF7F38F211AC1F", 00:15:55.764 "uuid": "9d1fe9d9-7b1b-475b-b7bf-7f38f211ac1f" 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "nsid": 2, 00:15:55.764 "bdev_name": "Malloc3", 00:15:55.764 "name": "Malloc3", 00:15:55.764 "nguid": "C22F8FE2FC9F437DADE68F3EC6DBFE68", 00:15:55.764 "uuid": "c22f8fe2-fc9f-437d-ade6-8f3ec6dbfe68" 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:55.764 "subtype": "NVMe", 00:15:55.764 "listen_addresses": [ 00:15:55.764 { 00:15:55.764 "trtype": "VFIOUSER", 00:15:55.764 "adrfam": "IPv4", 00:15:55.764 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:55.764 "trsvcid": "0" 00:15:55.764 } 00:15:55.764 ], 00:15:55.764 "allow_any_host": true, 00:15:55.764 "hosts": [], 00:15:55.764 "serial_number": "SPDK2", 00:15:55.764 "model_number": "SPDK bdev Controller", 00:15:55.764 "max_namespaces": 32, 00:15:55.764 "min_cntlid": 1, 00:15:55.764 "max_cntlid": 65519, 00:15:55.764 "namespaces": [ 00:15:55.764 { 00:15:55.764 "nsid": 1, 00:15:55.764 "bdev_name": "Malloc2", 00:15:55.764 "name": "Malloc2", 00:15:55.764 "nguid": "7F41D0989EE04CF58EDD796C2383C18D", 00:15:55.764 "uuid": "7f41d098-9ee0-4cf5-8edd-796c2383c18d" 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4018747 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:55.764 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:55.764 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.049 [2024-07-15 20:21:34.347568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:56.049 Malloc4 00:15:56.049 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:56.306 [2024-07-15 20:21:34.720404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:56.306 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:56.306 Asynchronous Event Request test 00:15:56.306 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:56.306 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:56.306 Registering asynchronous event callbacks... 00:15:56.306 Starting namespace attribute notice tests for all controllers... 00:15:56.306 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:56.306 aer_cb - Changed Namespace 00:15:56.306 Cleaning up... 00:15:56.566 [ 00:15:56.566 { 00:15:56.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:56.566 "subtype": "Discovery", 00:15:56.566 "listen_addresses": [], 00:15:56.566 "allow_any_host": true, 00:15:56.566 "hosts": [] 00:15:56.566 }, 00:15:56.566 { 00:15:56.566 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:56.566 "subtype": "NVMe", 00:15:56.566 "listen_addresses": [ 00:15:56.566 { 00:15:56.566 "trtype": "VFIOUSER", 00:15:56.566 "adrfam": "IPv4", 00:15:56.566 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:56.566 "trsvcid": "0" 00:15:56.566 } 00:15:56.566 ], 00:15:56.566 "allow_any_host": true, 00:15:56.566 "hosts": [], 00:15:56.566 "serial_number": "SPDK1", 00:15:56.566 "model_number": "SPDK bdev Controller", 00:15:56.566 "max_namespaces": 32, 00:15:56.566 "min_cntlid": 1, 00:15:56.566 "max_cntlid": 65519, 00:15:56.566 "namespaces": [ 00:15:56.566 { 00:15:56.566 "nsid": 1, 00:15:56.566 "bdev_name": "Malloc1", 00:15:56.566 "name": "Malloc1", 00:15:56.566 "nguid": "9D1FE9D97B1B475BB7BF7F38F211AC1F", 00:15:56.566 "uuid": "9d1fe9d9-7b1b-475b-b7bf-7f38f211ac1f" 00:15:56.566 }, 00:15:56.566 { 00:15:56.566 "nsid": 2, 00:15:56.566 "bdev_name": "Malloc3", 00:15:56.566 "name": "Malloc3", 00:15:56.566 "nguid": "C22F8FE2FC9F437DADE68F3EC6DBFE68", 00:15:56.566 "uuid": "c22f8fe2-fc9f-437d-ade6-8f3ec6dbfe68" 00:15:56.566 } 00:15:56.566 ] 00:15:56.566 }, 00:15:56.566 { 00:15:56.566 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:56.566 "subtype": "NVMe", 00:15:56.566 "listen_addresses": [ 00:15:56.566 { 00:15:56.566 "trtype": "VFIOUSER", 00:15:56.566 "adrfam": "IPv4", 00:15:56.566 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:56.566 "trsvcid": "0" 00:15:56.566 } 00:15:56.566 ], 00:15:56.566 "allow_any_host": true, 00:15:56.566 "hosts": [], 00:15:56.566 "serial_number": "SPDK2", 00:15:56.566 "model_number": "SPDK bdev Controller", 00:15:56.566 
"max_namespaces": 32, 00:15:56.566 "min_cntlid": 1, 00:15:56.566 "max_cntlid": 65519, 00:15:56.566 "namespaces": [ 00:15:56.566 { 00:15:56.566 "nsid": 1, 00:15:56.566 "bdev_name": "Malloc2", 00:15:56.566 "name": "Malloc2", 00:15:56.566 "nguid": "7F41D0989EE04CF58EDD796C2383C18D", 00:15:56.566 "uuid": "7f41d098-9ee0-4cf5-8edd-796c2383c18d" 00:15:56.566 }, 00:15:56.566 { 00:15:56.566 "nsid": 2, 00:15:56.566 "bdev_name": "Malloc4", 00:15:56.566 "name": "Malloc4", 00:15:56.566 "nguid": "11CD228C2EA44844A2D7D2FCF73561E2", 00:15:56.566 "uuid": "11cd228c-2ea4-4844-a2d7-d2fcf73561e2" 00:15:56.566 } 00:15:56.566 ] 00:15:56.566 } 00:15:56.566 ] 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4018747 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4012528 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4012528 ']' 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4012528 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.566 20:21:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4012528 00:15:56.566 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.566 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.566 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4012528' 00:15:56.566 killing process with pid 4012528 00:15:56.566 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4012528 00:15:56.566 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4012528 00:15:56.824 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:56.824 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:56.824 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4018922 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4018922' 00:15:56.825 Process pid: 4018922 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4018922 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4018922 ']' 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.825 20:21:35 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.825 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:57.082 [2024-07-15 20:21:35.395681] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:57.082 [2024-07-15 20:21:35.396846] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:15:57.082 [2024-07-15 20:21:35.396930] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.082 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.082 [2024-07-15 20:21:35.456043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.082 [2024-07-15 20:21:35.541036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.082 [2024-07-15 20:21:35.541092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.082 [2024-07-15 20:21:35.541114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.082 [2024-07-15 20:21:35.541125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.082 [2024-07-15 20:21:35.541134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.082 [2024-07-15 20:21:35.541186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.082 [2024-07-15 20:21:35.541248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.082 [2024-07-15 20:21:35.541313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.082 [2024-07-15 20:21:35.541315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.339 [2024-07-15 20:21:35.637052] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:57.339 [2024-07-15 20:21:35.637220] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:57.339 [2024-07-15 20:21:35.637498] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:57.339 [2024-07-15 20:21:35.638037] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:57.340 [2024-07-15 20:21:35.638288] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:57.340 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.340 20:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:57.340 20:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:58.274 20:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:58.532 20:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:58.532 20:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:58.532 20:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:58.532 20:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:58.532 20:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:58.789 Malloc1 00:15:58.789 20:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:59.047 20:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:59.304 20:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:59.562 20:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:59.562 20:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:59.562 20:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:59.820 Malloc2 00:15:59.820 20:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:00.078 20:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:00.335 20:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:00.593 20:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4018922 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4018922 ']' 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4018922 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.594 20:21:39 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4018922 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4018922' 00:16:00.594 killing process with pid 4018922 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4018922 00:16:00.594 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4018922 00:16:01.161 20:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:01.161 20:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:01.161 00:16:01.161 real 0m52.809s 00:16:01.161 user 3m28.250s 00:16:01.161 sys 0m4.446s 00:16:01.161 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.161 20:21:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:01.161 ************************************ 00:16:01.161 END TEST nvmf_vfio_user 00:16:01.161 ************************************ 00:16:01.161 20:21:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:01.161 20:21:39 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:01.161 20:21:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.161 20:21:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.161 20:21:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.161 ************************************ 00:16:01.161 START TEST nvmf_vfio_user_nvme_compliance 00:16:01.161 ************************************ 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:01.162 * Looking for test storage... 
00:16:01.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=4019466 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4019466' 00:16:01.162 Process pid: 4019466 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4019466 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 4019466 ']' 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.162 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.162 [2024-07-15 20:21:39.561000] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:16:01.162 [2024-07-15 20:21:39.561096] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.162 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.162 [2024-07-15 20:21:39.617939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:01.420 [2024-07-15 20:21:39.702229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.420 [2024-07-15 20:21:39.702283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.420 [2024-07-15 20:21:39.702306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.420 [2024-07-15 20:21:39.702317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.420 [2024-07-15 20:21:39.702326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:01.421 [2024-07-15 20:21:39.702418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.421 [2024-07-15 20:21:39.702483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.421 [2024-07-15 20:21:39.702486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.421 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.421 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:01.421 20:21:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.357 malloc0 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.357 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.617 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.618 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:02.618 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.618 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.618 20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.618 
20:21:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:02.618 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.618 00:16:02.618 00:16:02.618 CUnit - A unit testing framework for C - Version 2.1-3 00:16:02.618 http://cunit.sourceforge.net/ 00:16:02.618 00:16:02.618 00:16:02.618 Suite: nvme_compliance 00:16:02.618 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 20:21:41.049872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.618 [2024-07-15 20:21:41.051346] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:02.618 [2024-07-15 20:21:41.051371] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:02.618 [2024-07-15 20:21:41.051383] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:02.618 [2024-07-15 20:21:41.052874] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.618 passed 00:16:02.618 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 20:21:41.139504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.618 [2024-07-15 20:21:41.142529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.876 passed 00:16:02.876 Test: admin_identify_ns ...[2024-07-15 20:21:41.230494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.876 [2024-07-15 20:21:41.289909] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:02.876 [2024-07-15 20:21:41.297894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:02.876 [2024-07-15 20:21:41.319002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.876 passed 00:16:02.876 Test: admin_get_features_mandatory_features ...[2024-07-15 20:21:41.402650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.876 [2024-07-15 20:21:41.405670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.133 passed 00:16:03.133 Test: admin_get_features_optional_features ...[2024-07-15 20:21:41.490246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.133 [2024-07-15 20:21:41.493269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.133 passed 00:16:03.133 Test: admin_set_features_number_of_queues ...[2024-07-15 20:21:41.575378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.392 [2024-07-15 20:21:41.679975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.392 passed 00:16:03.392 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 20:21:41.763589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.392 [2024-07-15 20:21:41.766617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.392 passed 00:16:03.392 Test: admin_get_log_page_with_lpo ...[2024-07-15 20:21:41.847738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.392 [2024-07-15 20:21:41.916894] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:03.650 [2024-07-15 20:21:41.929966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.650 passed 00:16:03.650 Test: fabric_property_get ...[2024-07-15 20:21:42.012540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.650 [2024-07-15 20:21:42.013806] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:03.650 [2024-07-15 20:21:42.015559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.650 passed 00:16:03.650 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 20:21:42.099082] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.650 [2024-07-15 20:21:42.100427] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:03.650 [2024-07-15 20:21:42.102104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.650 passed 00:16:03.909 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 20:21:42.185390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.909 [2024-07-15 20:21:42.268901] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.909 [2024-07-15 20:21:42.284884] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.909 [2024-07-15 20:21:42.289990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.909 passed 00:16:03.909 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 20:21:42.373998] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.909 [2024-07-15 20:21:42.375286] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:03.909 [2024-07-15 20:21:42.377026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.909 passed 00:16:04.167 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 20:21:42.460501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.167 [2024-07-15 20:21:42.535892] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:04.167 [2024-07-15 20:21:42.562901] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:04.167 [2024-07-15 20:21:42.567979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.167 passed 00:16:04.167 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 20:21:42.649438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.167 [2024-07-15 20:21:42.650740] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:04.167 [2024-07-15 20:21:42.650791] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:04.167 [2024-07-15 20:21:42.654471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.167 passed 00:16:04.425 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 20:21:42.737451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.425 [2024-07-15 20:21:42.828915] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:04.425 [2024-07-15 20:21:42.836902] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:04.425 [2024-07-15 20:21:42.844886] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:04.425 [2024-07-15 20:21:42.852901] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:04.425 [2024-07-15 20:21:42.881991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.425 passed 00:16:04.681 Test: admin_create_io_sq_verify_pc ...[2024-07-15 20:21:42.965786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.681 [2024-07-15 20:21:42.982897] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:04.681 [2024-07-15 20:21:43.000151] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.681 passed 00:16:04.681 Test: admin_create_io_qp_max_qps ...[2024-07-15 20:21:43.083692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.057 [2024-07-15 20:21:44.193895] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:06.057 [2024-07-15 20:21:44.578405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.314 passed 00:16:06.314 Test: admin_create_io_sq_shared_cq ...[2024-07-15 20:21:44.663469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.314 [2024-07-15 20:21:44.794897] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:06.314 [2024-07-15 20:21:44.831971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.574 passed 00:16:06.574 00:16:06.574 Run Summary: Type Total Ran Passed Failed Inactive 00:16:06.574 suites 1 1 n/a 0 0 00:16:06.574 tests 18 18 18 0 0 00:16:06.574 asserts 360 360 360 0 n/a 00:16:06.574 00:16:06.574 Elapsed time = 1.569 seconds 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4019466 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 4019466 ']' 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 4019466 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4019466 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4019466' 00:16:06.574 killing process with pid 4019466 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 4019466 00:16:06.574 20:21:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 4019466 00:16:06.833 20:21:45 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:06.833 00:16:06.833 real 0m5.720s 00:16:06.833 user 0m16.057s 00:16:06.833 sys 0m0.559s 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.833 ************************************ 00:16:06.833 END TEST nvmf_vfio_user_nvme_compliance 00:16:06.833 ************************************ 00:16:06.833 20:21:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:06.833 20:21:45 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:06.833 20:21:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:06.833 20:21:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.833 20:21:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:06.833 ************************************ 00:16:06.833 START TEST nvmf_vfio_user_fuzz 00:16:06.833 ************************************ 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:06.833 * Looking for test storage... 00:16:06.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
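The nvme gen-hostnqn lines above generate the host NQN with nvme-cli and keep the bare UUID as the host ID. A minimal sketch of that derivation, assuming nvme-cli is installed (the variable names are illustrative, not the ones common.sh uses internally):

    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostnqn=$(nvme gen-hostnqn)
    # keep only the trailing UUID as the host ID
    hostid=${hostnqn##*:}
    echo "$hostnqn" "$hostid"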
00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.833 20:21:45 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4020183 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4020183' 00:16:06.833 Process pid: 4020183 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4020183 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4020183 ']' 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
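The pid handling above repeats the pattern from the compliance run: start nvmf_tgt in the background, arm a trap so the target is reaped on any exit, then wait for its RPC socket before issuing commands. A simplified sketch of that pattern (killprocess and waitforlisten are helpers from autotest_common.sh; the plain kill and socket poll below are stand-ins for them, not their actual implementation):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # wait until the target is listening on its UNIX domain RPC socket
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done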
00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.833 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.092 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.092 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:07.092 20:21:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.482 malloc0 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:08.482 20:21:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:40.633 Fuzzing completed. 
Shutting down the fuzz application 00:16:40.633 00:16:40.633 Dumping successful admin opcodes: 00:16:40.633 8, 9, 10, 24, 00:16:40.633 Dumping successful io opcodes: 00:16:40.633 0, 00:16:40.633 NS: 0x200003a1ef00 I/O qp, Total commands completed: 619383, total successful commands: 2397, random_seed: 4008659456 00:16:40.633 NS: 0x200003a1ef00 admin qp, Total commands completed: 119829, total successful commands: 981, random_seed: 2893579776 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4020183 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4020183 ']' 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 4020183 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4020183 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4020183' 00:16:40.633 killing process with pid 4020183 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 4020183 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 4020183 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:40.633 00:16:40.633 real 0m32.237s 00:16:40.633 user 0m31.390s 00:16:40.633 sys 0m30.160s 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.633 20:22:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.633 ************************************ 00:16:40.633 END TEST nvmf_vfio_user_fuzz 00:16:40.633 ************************************ 00:16:40.633 20:22:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:40.633 20:22:17 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:40.633 20:22:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:40.633 20:22:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.633 20:22:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:40.633 ************************************ 
00:16:40.634 START TEST nvmf_host_management 00:16:40.634 ************************************ 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:40.634 * Looking for test storage... 00:16:40.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.634 
20:22:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:40.634 20:22:17 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:40.634 20:22:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.894 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.895 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:16:41.153 00:16:41.153 --- 10.0.0.2 ping statistics --- 00:16:41.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.153 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:16:41.153 00:16:41.153 --- 10.0.0.1 ping statistics --- 00:16:41.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.153 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4025623 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4025623 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4025623 ']' 00:16:41.153 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.154 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.154 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:41.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.154 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.154 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.154 [2024-07-15 20:22:19.579240] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:16:41.154 [2024-07-15 20:22:19.579347] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.154 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.154 [2024-07-15 20:22:19.648076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.412 [2024-07-15 20:22:19.739995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.412 [2024-07-15 20:22:19.740055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.412 [2024-07-15 20:22:19.740081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.412 [2024-07-15 20:22:19.740096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.412 [2024-07-15 20:22:19.740107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.412 [2024-07-15 20:22:19.740201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.412 [2024-07-15 20:22:19.740316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.412 [2024-07-15 20:22:19.740382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.412 [2024-07-15 20:22:19.740384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.412 [2024-07-15 20:22:19.893787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.412 20:22:19 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.412 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.412 Malloc0 00:16:41.671 [2024-07-15 20:22:19.954935] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4025670 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4025670 /var/tmp/bdevperf.sock 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4025670 ']' 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
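Here bdevperf reads its bdev configuration from /dev/fd/63, which gen_nvmf_target_json (from the sourced nvmf/common.sh) writes for subsystem 0. The same run could use an explicit file; a minimal sketch, with /tmp/bdevperf_nvme0.json as an illustrative path and the file contents being the JSON printed just below:

    # write the generated target JSON to a file instead of piping it on /dev/fd/63
    gen_nvmf_target_json 0 > /tmp/bdevperf_nvme0.json
    # 64 outstanding I/Os of 64 KiB, verify workload, 10 second run (same flags as above)
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10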
00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.671 { 00:16:41.671 "params": { 00:16:41.671 "name": "Nvme$subsystem", 00:16:41.671 "trtype": "$TEST_TRANSPORT", 00:16:41.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.671 "adrfam": "ipv4", 00:16:41.671 "trsvcid": "$NVMF_PORT", 00:16:41.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.671 "hdgst": ${hdgst:-false}, 00:16:41.671 "ddgst": ${ddgst:-false} 00:16:41.671 }, 00:16:41.671 "method": "bdev_nvme_attach_controller" 00:16:41.671 } 00:16:41.671 EOF 00:16:41.671 )") 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:41.671 20:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:41.671 "params": { 00:16:41.671 "name": "Nvme0", 00:16:41.671 "trtype": "tcp", 00:16:41.671 "traddr": "10.0.0.2", 00:16:41.671 "adrfam": "ipv4", 00:16:41.671 "trsvcid": "4420", 00:16:41.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:41.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:41.671 "hdgst": false, 00:16:41.671 "ddgst": false 00:16:41.671 }, 00:16:41.671 "method": "bdev_nvme_attach_controller" 00:16:41.671 }' 00:16:41.672 [2024-07-15 20:22:20.034978] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:16:41.672 [2024-07-15 20:22:20.035065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025670 ] 00:16:41.672 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.672 [2024-07-15 20:22:20.099970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.672 [2024-07-15 20:22:20.187552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.930 Running I/O for 10 seconds... 
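The JSON above makes bdevperf attach a single NVMe-oF controller (Nvme0) over TCP to 10.0.0.2:4420 at startup. For reference, the same attach expressed as an rpc.py call against the bdevperf RPC socket might look like the sketch below; the flag spellings are the usual scripts/rpc.py ones and are an assumption here, not taken from this log:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0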
00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.188 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=386 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 386 -ge 100 ']' 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.450 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:42.450 [2024-07-15 20:22:20.749482] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d6e20 is same with the state(5) to be set 00:16:42.450 [2024-07-15 20:22:20.750291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.750983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.450 [2024-07-15 20:22:20.750999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.450 [2024-07-15 20:22:20.751014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:42.451 [2024-07-15 20:22:20.751493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 
20:22:20.751799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.751976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.751990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.752006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.752020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.752035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.752049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.451 [2024-07-15 20:22:20.752065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.451 [2024-07-15 20:22:20.752079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752124] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.452 [2024-07-15 20:22:20.752587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.752601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb39420 is same with the state(5) to be set 00:16:42.452 [2024-07-15 20:22:20.752670] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb39420 was disconnected and freed. reset controller. 
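Editor's note (sketch): the burst of ABORTED - SQ DELETION completions above is provoked on purpose. With bdevperf I/O in flight, the test revokes the initiator's access to the subsystem, the target tears down the queue pair (qpair 0xb39420 disconnected and freed), and the reconnect that follows fails with "does not allow host" until access is restored. The two RPCs behind the rpc_cmd calls in this run are, in sketch form (scripts/rpc.py is the in-tree helper; NQNs as used here):

# Revoke host access while I/O is outstanding; in-flight commands complete as ABORTED - SQ DELETION.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access so the next connect attempt (and the follow-up bdevperf run) succeeds.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0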
00:16:42.452 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.452 [2024-07-15 20:22:20.753897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:42.452 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:42.452 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.452 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:42.452 task offset: 56064 on job bdev=Nvme0n1 fails 00:16:42.452 00:16:42.452 Latency(us) 00:16:42.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.452 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:42.452 Job: Nvme0n1 ended in about 0.38 seconds with error 00:16:42.452 Verification LBA range: start 0x0 length 0x400 00:16:42.452 Nvme0n1 : 0.38 1002.19 62.64 167.03 0.00 53226.87 3155.44 47574.28 00:16:42.452 =================================================================================================================== 00:16:42.452 Total : 1002.19 62.64 167.03 0.00 53226.87 3155.44 47574.28 00:16:42.452 [2024-07-15 20:22:20.755794] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:42.452 [2024-07-15 20:22:20.755835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3f000 (9): Bad file descriptor 00:16:42.452 [2024-07-15 20:22:20.757263] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:42.452 [2024-07-15 20:22:20.757510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:42.452 [2024-07-15 20:22:20.757539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.452 [2024-07-15 20:22:20.757566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:42.452 [2024-07-15 20:22:20.757582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:42.452 [2024-07-15 20:22:20.757595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:42.452 [2024-07-15 20:22:20.757607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb3f000 00:16:42.452 [2024-07-15 20:22:20.757641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3f000 (9): Bad file descriptor 00:16:42.452 [2024-07-15 20:22:20.757668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:42.452 [2024-07-15 20:22:20.757684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:42.452 [2024-07-15 20:22:20.757700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:42.452 [2024-07-15 20:22:20.757721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:42.452 20:22:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.452 20:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:43.390 20:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4025670 00:16:43.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4025670) - No such process 00:16:43.390 20:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:43.390 20:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:43.390 20:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:43.391 { 00:16:43.391 "params": { 00:16:43.391 "name": "Nvme$subsystem", 00:16:43.391 "trtype": "$TEST_TRANSPORT", 00:16:43.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:43.391 "adrfam": "ipv4", 00:16:43.391 "trsvcid": "$NVMF_PORT", 00:16:43.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:43.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:43.391 "hdgst": ${hdgst:-false}, 00:16:43.391 "ddgst": ${ddgst:-false} 00:16:43.391 }, 00:16:43.391 "method": "bdev_nvme_attach_controller" 00:16:43.391 } 00:16:43.391 EOF 00:16:43.391 )") 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:43.391 20:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:43.391 "params": { 00:16:43.391 "name": "Nvme0", 00:16:43.391 "trtype": "tcp", 00:16:43.391 "traddr": "10.0.0.2", 00:16:43.391 "adrfam": "ipv4", 00:16:43.391 "trsvcid": "4420", 00:16:43.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:43.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:43.391 "hdgst": false, 00:16:43.391 "ddgst": false 00:16:43.391 }, 00:16:43.391 "method": "bdev_nvme_attach_controller" 00:16:43.391 }' 00:16:43.391 [2024-07-15 20:22:21.809988] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:16:43.391 [2024-07-15 20:22:21.810063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025939 ] 00:16:43.391 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.391 [2024-07-15 20:22:21.869101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.649 [2024-07-15 20:22:21.956959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.908 Running I/O for 1 seconds... 
00:16:44.849 00:16:44.849 Latency(us) 00:16:44.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.849 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:44.849 Verification LBA range: start 0x0 length 0x400 00:16:44.849 Nvme0n1 : 1.03 1185.50 74.09 0.00 0.00 53212.19 13107.20 45632.47 00:16:44.849 =================================================================================================================== 00:16:44.849 Total : 1185.50 74.09 0.00 0.00 53212.19 13107.20 45632.47 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.107 rmmod nvme_tcp 00:16:45.107 rmmod nvme_fabrics 00:16:45.107 rmmod nvme_keyring 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4025623 ']' 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4025623 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4025623 ']' 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4025623 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4025623 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4025623' 00:16:45.107 killing process with pid 4025623 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4025623 00:16:45.107 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4025623 00:16:45.365 [2024-07-15 20:22:23.797261] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.365 20:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.905 20:22:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:47.905 20:22:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:47.905 00:16:47.905 real 0m8.361s 00:16:47.905 user 0m19.108s 00:16:47.905 sys 0m2.544s 00:16:47.905 20:22:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.905 20:22:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:47.906 ************************************ 00:16:47.906 END TEST nvmf_host_management 00:16:47.906 ************************************ 00:16:47.906 20:22:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:47.906 20:22:25 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.906 20:22:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:47.906 20:22:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.906 20:22:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.906 ************************************ 00:16:47.906 START TEST nvmf_lvol 00:16:47.906 ************************************ 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.906 * Looking for test storage... 
00:16:47.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.906 20:22:25 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:47.906 20:22:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.808 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:49.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:49.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:49.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:49.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:49.809 
20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.809 20:22:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:16:49.809 00:16:49.809 --- 10.0.0.2 ping statistics --- 00:16:49.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.809 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:16:49.809 00:16:49.809 --- 10.0.0.1 ping statistics --- 00:16:49.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.809 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4028138 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4028138 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4028138 ']' 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.809 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.809 [2024-07-15 20:22:28.154058] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:16:49.809 [2024-07-15 20:22:28.154136] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.809 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.809 [2024-07-15 20:22:28.225166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.809 [2024-07-15 20:22:28.317546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.809 [2024-07-15 20:22:28.317606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:49.809 [2024-07-15 20:22:28.317633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.809 [2024-07-15 20:22:28.317648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.810 [2024-07-15 20:22:28.317660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.810 [2024-07-15 20:22:28.317727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.810 [2024-07-15 20:22:28.317797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.810 [2024-07-15 20:22:28.317800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.068 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.068 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:50.068 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:50.068 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.068 20:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:50.068 20:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.068 20:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.325 [2024-07-15 20:22:28.661770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.326 20:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.583 20:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:50.583 20:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.840 20:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:50.840 20:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:51.159 20:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:51.417 20:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=01f3fa7c-af20-41ec-9c36-7c8e709d0362 00:16:51.417 20:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 01f3fa7c-af20-41ec-9c36-7c8e709d0362 lvol 20 00:16:51.675 20:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ea1eeaa6-3b6f-49a4-840b-99e4da7e8d19 00:16:51.675 20:22:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:51.933 20:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea1eeaa6-3b6f-49a4-840b-99e4da7e8d19 00:16:52.190 20:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:52.449 [2024-07-15 20:22:30.722368] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.449 20:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.708 20:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4028442 00:16:52.708 20:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:52.708 20:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:52.708 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.644 20:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ea1eeaa6-3b6f-49a4-840b-99e4da7e8d19 MY_SNAPSHOT 00:16:53.902 20:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a04b7921-a5cf-4dc4-b8f2-9e27ccf8fa57 00:16:53.902 20:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ea1eeaa6-3b6f-49a4-840b-99e4da7e8d19 30 00:16:54.160 20:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a04b7921-a5cf-4dc4-b8f2-9e27ccf8fa57 MY_CLONE 00:16:54.418 20:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bae574b6-b8be-48f8-b41b-ac05109447e6 00:16:54.418 20:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bae574b6-b8be-48f8-b41b-ac05109447e6 00:16:54.986 20:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4028442 00:17:03.123 Initializing NVMe Controllers 00:17:03.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:03.123 Controller IO queue size 128, less than required. 00:17:03.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:03.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:03.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:03.123 Initialization complete. Launching workers. 
00:17:03.123 ======================================================== 00:17:03.123 Latency(us) 00:17:03.123 Device Information : IOPS MiB/s Average min max 00:17:03.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10482.54 40.95 12220.85 1804.32 73451.31 00:17:03.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10491.94 40.98 12203.43 2389.73 83702.31 00:17:03.123 ======================================================== 00:17:03.123 Total : 20974.48 81.93 12212.13 1804.32 83702.31 00:17:03.123 00:17:03.123 20:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:03.381 20:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ea1eeaa6-3b6f-49a4-840b-99e4da7e8d19 00:17:03.639 20:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 01f3fa7c-af20-41ec-9c36-7c8e709d0362 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.899 rmmod nvme_tcp 00:17:03.899 rmmod nvme_fabrics 00:17:03.899 rmmod nvme_keyring 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4028138 ']' 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4028138 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4028138 ']' 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4028138 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4028138 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4028138' 00:17:03.899 killing process with pid 4028138 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4028138 00:17:03.899 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4028138 00:17:04.158 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.158 
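With the nvmf_lvol test finished above, its body reduces to a short rpc.py sequence against the target started in the cvl_0_0_ns_spdk namespace. A condensed sketch, keeping the commands, names and sizes exactly as logged (the variable assignments mirror the lvs=/lvol=/snapshot=/clone= captures in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                      # Malloc0
    $rpc bdev_malloc_create 64 512                      # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes over TCP (the 10 s run whose totals are printed above),
    # the volume is snapshotted, resized, cloned and the clone inflated:
    snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"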
20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.158 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.158 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.158 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.158 20:22:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.158 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.158 20:22:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.698 00:17:06.698 real 0m18.697s 00:17:06.698 user 1m3.951s 00:17:06.698 sys 0m5.545s 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:06.698 ************************************ 00:17:06.698 END TEST nvmf_lvol 00:17:06.698 ************************************ 00:17:06.698 20:22:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.698 20:22:44 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:06.698 20:22:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.698 20:22:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.698 20:22:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.698 ************************************ 00:17:06.698 START TEST nvmf_lvs_grow 00:17:06.698 ************************************ 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:06.698 * Looking for test storage... 
00:17:06.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.698 20:22:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.699 20:22:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.600 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.600 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.600 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.601 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:17:08.601 00:17:08.601 --- 10.0.0.2 ping statistics --- 00:17:08.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.601 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:17:08.601 00:17:08.601 --- 10.0.0.1 ping statistics --- 00:17:08.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.601 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4031701 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4031701 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4031701 ']' 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.601 20:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.601 [2024-07-15 20:22:46.909748] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:17:08.601 [2024-07-15 20:22:46.909818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.601 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.601 [2024-07-15 20:22:46.971685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.601 [2024-07-15 20:22:47.054977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.601 [2024-07-15 20:22:47.055029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:08.601 [2024-07-15 20:22:47.055054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.601 [2024-07-15 20:22:47.055065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.602 [2024-07-15 20:22:47.055077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.602 [2024-07-15 20:22:47.055102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.859 20:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.859 20:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:08.859 20:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.860 20:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.860 20:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.860 20:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.860 20:22:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:09.117 [2024-07-15 20:22:47.414619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:09.117 ************************************ 00:17:09.117 START TEST lvs_grow_clean 00:17:09.117 ************************************ 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.117 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.376 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:09.376 20:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:09.634 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:09.634 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:09.634 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:09.892 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:09.892 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:09.892 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 lvol 150 00:17:10.149 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=17304334-3296-4ab3-847f-31dc16519b27 00:17:10.149 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.149 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:10.416 [2024-07-15 20:22:48.851296] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:10.416 [2024-07-15 20:22:48.851386] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:10.416 true 00:17:10.416 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:10.416 20:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:10.682 20:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:10.682 20:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:10.940 20:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17304334-3296-4ab3-847f-31dc16519b27 00:17:11.197 20:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:11.454 [2024-07-15 20:22:49.858428] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.454 20:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.711 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4032136 00:17:11.711 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.711 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4032136 /var/tmp/bdevperf.sock 00:17:11.711 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4032136 ']' 00:17:11.711 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:11.711 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.711 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.712 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.712 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.712 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:11.712 [2024-07-15 20:22:50.194566] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:17:11.712 [2024-07-15 20:22:50.194653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032136 ] 00:17:11.712 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.969 [2024-07-15 20:22:50.254069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.969 [2024-07-15 20:22:50.343455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.969 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.969 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:11.969 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:12.533 Nvme0n1 00:17:12.533 20:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:12.790 [ 00:17:12.790 { 00:17:12.790 "name": "Nvme0n1", 00:17:12.790 "aliases": [ 00:17:12.790 "17304334-3296-4ab3-847f-31dc16519b27" 00:17:12.790 ], 00:17:12.790 "product_name": "NVMe disk", 00:17:12.790 "block_size": 4096, 00:17:12.790 "num_blocks": 38912, 00:17:12.790 "uuid": "17304334-3296-4ab3-847f-31dc16519b27", 00:17:12.790 "assigned_rate_limits": { 00:17:12.790 "rw_ios_per_sec": 0, 00:17:12.790 "rw_mbytes_per_sec": 0, 00:17:12.790 "r_mbytes_per_sec": 0, 00:17:12.790 "w_mbytes_per_sec": 0 00:17:12.790 }, 00:17:12.790 "claimed": false, 00:17:12.790 "zoned": false, 00:17:12.790 "supported_io_types": { 00:17:12.790 "read": true, 00:17:12.790 "write": true, 00:17:12.790 "unmap": true, 00:17:12.790 "flush": true, 00:17:12.790 "reset": true, 00:17:12.790 "nvme_admin": true, 00:17:12.790 "nvme_io": true, 00:17:12.790 "nvme_io_md": false, 00:17:12.790 "write_zeroes": true, 00:17:12.790 "zcopy": false, 00:17:12.790 "get_zone_info": false, 00:17:12.790 "zone_management": false, 00:17:12.790 "zone_append": false, 00:17:12.790 "compare": true, 00:17:12.790 "compare_and_write": true, 00:17:12.790 "abort": true, 00:17:12.790 "seek_hole": false, 00:17:12.790 "seek_data": false, 00:17:12.790 "copy": true, 00:17:12.790 "nvme_iov_md": false 00:17:12.790 }, 00:17:12.790 "memory_domains": [ 00:17:12.790 { 00:17:12.790 "dma_device_id": "system", 00:17:12.790 "dma_device_type": 1 00:17:12.790 } 00:17:12.790 ], 00:17:12.790 "driver_specific": { 00:17:12.790 "nvme": [ 00:17:12.790 { 00:17:12.790 "trid": { 00:17:12.790 "trtype": "TCP", 00:17:12.790 "adrfam": "IPv4", 00:17:12.790 "traddr": "10.0.0.2", 00:17:12.790 "trsvcid": "4420", 00:17:12.790 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:12.790 }, 00:17:12.790 "ctrlr_data": { 00:17:12.790 "cntlid": 1, 00:17:12.790 "vendor_id": "0x8086", 00:17:12.790 "model_number": "SPDK bdev Controller", 00:17:12.790 "serial_number": "SPDK0", 00:17:12.790 "firmware_revision": "24.09", 00:17:12.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:12.790 "oacs": { 00:17:12.790 "security": 0, 00:17:12.790 "format": 0, 00:17:12.790 "firmware": 0, 00:17:12.790 "ns_manage": 0 00:17:12.790 }, 00:17:12.790 "multi_ctrlr": true, 00:17:12.790 "ana_reporting": false 00:17:12.790 }, 
00:17:12.790 "vs": { 00:17:12.790 "nvme_version": "1.3" 00:17:12.790 }, 00:17:12.790 "ns_data": { 00:17:12.790 "id": 1, 00:17:12.790 "can_share": true 00:17:12.790 } 00:17:12.790 } 00:17:12.790 ], 00:17:12.790 "mp_policy": "active_passive" 00:17:12.790 } 00:17:12.790 } 00:17:12.790 ] 00:17:12.790 20:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4032267 00:17:12.790 20:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:12.790 20:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.790 Running I/O for 10 seconds... 00:17:14.162 Latency(us) 00:17:14.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.162 Nvme0n1 : 1.00 14141.00 55.24 0.00 0.00 0.00 0.00 0.00 00:17:14.162 =================================================================================================================== 00:17:14.162 Total : 14141.00 55.24 0.00 0.00 0.00 0.00 0.00 00:17:14.162 00:17:14.728 20:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:14.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.985 Nvme0n1 : 2.00 14270.50 55.74 0.00 0.00 0.00 0.00 0.00 00:17:14.985 =================================================================================================================== 00:17:14.985 Total : 14270.50 55.74 0.00 0.00 0.00 0.00 0.00 00:17:14.985 00:17:14.985 true 00:17:14.985 20:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:14.985 20:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:15.243 20:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:15.243 20:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:15.243 20:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4032267 00:17:15.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.809 Nvme0n1 : 3.00 14355.33 56.08 0.00 0.00 0.00 0.00 0.00 00:17:15.809 =================================================================================================================== 00:17:15.809 Total : 14355.33 56.08 0.00 0.00 0.00 0.00 0.00 00:17:15.809 00:17:17.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.182 Nvme0n1 : 4.00 14430.00 56.37 0.00 0.00 0.00 0.00 0.00 00:17:17.183 =================================================================================================================== 00:17:17.183 Total : 14430.00 56.37 0.00 0.00 0.00 0.00 0.00 00:17:17.183 00:17:18.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.131 Nvme0n1 : 5.00 14488.00 56.59 0.00 0.00 0.00 0.00 0.00 00:17:18.131 =================================================================================================================== 00:17:18.131 
Total : 14488.00 56.59 0.00 0.00 0.00 0.00 0.00 00:17:18.131 00:17:19.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.065 Nvme0n1 : 6.00 14526.50 56.74 0.00 0.00 0.00 0.00 0.00 00:17:19.065 =================================================================================================================== 00:17:19.065 Total : 14526.50 56.74 0.00 0.00 0.00 0.00 0.00 00:17:19.065 00:17:20.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.000 Nvme0n1 : 7.00 14572.14 56.92 0.00 0.00 0.00 0.00 0.00 00:17:20.000 =================================================================================================================== 00:17:20.000 Total : 14572.14 56.92 0.00 0.00 0.00 0.00 0.00 00:17:20.000 00:17:20.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.935 Nvme0n1 : 8.00 14600.62 57.03 0.00 0.00 0.00 0.00 0.00 00:17:20.935 =================================================================================================================== 00:17:20.935 Total : 14600.62 57.03 0.00 0.00 0.00 0.00 0.00 00:17:20.935 00:17:21.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.870 Nvme0n1 : 9.00 14626.33 57.13 0.00 0.00 0.00 0.00 0.00 00:17:21.870 =================================================================================================================== 00:17:21.870 Total : 14626.33 57.13 0.00 0.00 0.00 0.00 0.00 00:17:21.870 00:17:23.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.244 Nvme0n1 : 10.00 14643.80 57.20 0.00 0.00 0.00 0.00 0.00 00:17:23.244 =================================================================================================================== 00:17:23.244 Total : 14643.80 57.20 0.00 0.00 0.00 0.00 0.00 00:17:23.244 00:17:23.244 00:17:23.244 Latency(us) 00:17:23.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.244 Nvme0n1 : 10.01 14642.54 57.20 0.00 0.00 8735.67 2184.53 16602.45 00:17:23.244 =================================================================================================================== 00:17:23.244 Total : 14642.54 57.20 0.00 0.00 8735.67 2184.53 16602.45 00:17:23.244 0 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4032136 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4032136 ']' 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4032136 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4032136 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4032136' 00:17:23.244 killing process with pid 4032136 00:17:23.244 20:23:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4032136 00:17:23.244 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.244 00:17:23.244 Latency(us) 00:17:23.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.244 =================================================================================================================== 00:17:23.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4032136 00:17:23.244 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.502 20:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:23.760 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:23.760 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:24.018 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:24.018 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:24.018 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:24.276 [2024-07-15 20:23:02.708688] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
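The grow path that lvs_grow_clean is exercising here is: extend the backing file, let the AIO bdev rescan its size, then grow the lvstore and re-read the cluster counters over RPC. A condensed sketch using only commands, sizes and counts visible in this trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_create -u "$lvs" lvol 150            # 150 MiB lvol; store starts at 49 data clusters
    truncate -s 400M "$aio"                             # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                       # AIO bdev picks up the new block count
    $rpc bdev_lvol_grow_lvstore -u "$lvs"               # lvstore grows from 49 to 99 data clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'

In this run that leaves 99 total and 61 free clusters (the 150 MiB lvol occupies 38), which is what the later nvmf_lvs_grow.sh@88 and @89 checks compare against after the aio_bdev is deleted and re-created.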
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:24.276 20:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:24.534 request: 00:17:24.534 { 00:17:24.534 "uuid": "126ef356-0ceb-458d-86e2-7f3e3a13b8d9", 00:17:24.534 "method": "bdev_lvol_get_lvstores", 00:17:24.534 "req_id": 1 00:17:24.534 } 00:17:24.534 Got JSON-RPC error response 00:17:24.534 response: 00:17:24.534 { 00:17:24.534 "code": -19, 00:17:24.534 "message": "No such device" 00:17:24.534 } 00:17:24.534 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:24.534 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.534 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.534 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.534 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:24.793 aio_bdev 00:17:24.793 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 17304334-3296-4ab3-847f-31dc16519b27 00:17:24.793 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=17304334-3296-4ab3-847f-31dc16519b27 00:17:24.793 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:24.793 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:24.793 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.793 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.793 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:25.394 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17304334-3296-4ab3-847f-31dc16519b27 -t 2000 00:17:25.394 [ 00:17:25.394 { 00:17:25.394 "name": "17304334-3296-4ab3-847f-31dc16519b27", 00:17:25.394 "aliases": [ 00:17:25.394 "lvs/lvol" 00:17:25.394 ], 00:17:25.394 "product_name": "Logical Volume", 00:17:25.394 "block_size": 4096, 00:17:25.394 "num_blocks": 38912, 00:17:25.394 "uuid": "17304334-3296-4ab3-847f-31dc16519b27", 00:17:25.394 "assigned_rate_limits": { 00:17:25.394 "rw_ios_per_sec": 0, 00:17:25.394 "rw_mbytes_per_sec": 0, 00:17:25.394 "r_mbytes_per_sec": 0, 00:17:25.394 "w_mbytes_per_sec": 0 00:17:25.394 }, 00:17:25.394 "claimed": false, 00:17:25.394 "zoned": false, 00:17:25.394 "supported_io_types": { 00:17:25.394 "read": true, 00:17:25.394 "write": true, 00:17:25.394 "unmap": true, 00:17:25.394 "flush": false, 00:17:25.394 "reset": true, 00:17:25.394 "nvme_admin": false, 00:17:25.394 "nvme_io": false, 00:17:25.394 
"nvme_io_md": false, 00:17:25.394 "write_zeroes": true, 00:17:25.394 "zcopy": false, 00:17:25.394 "get_zone_info": false, 00:17:25.394 "zone_management": false, 00:17:25.394 "zone_append": false, 00:17:25.394 "compare": false, 00:17:25.394 "compare_and_write": false, 00:17:25.394 "abort": false, 00:17:25.394 "seek_hole": true, 00:17:25.394 "seek_data": true, 00:17:25.394 "copy": false, 00:17:25.394 "nvme_iov_md": false 00:17:25.394 }, 00:17:25.394 "driver_specific": { 00:17:25.394 "lvol": { 00:17:25.394 "lvol_store_uuid": "126ef356-0ceb-458d-86e2-7f3e3a13b8d9", 00:17:25.394 "base_bdev": "aio_bdev", 00:17:25.394 "thin_provision": false, 00:17:25.394 "num_allocated_clusters": 38, 00:17:25.394 "snapshot": false, 00:17:25.394 "clone": false, 00:17:25.394 "esnap_clone": false 00:17:25.394 } 00:17:25.394 } 00:17:25.394 } 00:17:25.394 ] 00:17:25.394 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:25.394 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:25.394 20:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:25.652 20:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:25.652 20:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:25.652 20:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:25.910 20:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:25.910 20:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 17304334-3296-4ab3-847f-31dc16519b27 00:17:26.168 20:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 126ef356-0ceb-458d-86e2-7f3e3a13b8d9 00:17:26.426 20:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:26.685 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.685 00:17:26.685 real 0m17.742s 00:17:26.685 user 0m17.109s 00:17:26.685 sys 0m1.932s 00:17:26.685 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:26.685 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.685 ************************************ 00:17:26.685 END TEST lvs_grow_clean 00:17:26.685 ************************************ 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:26.944 ************************************ 00:17:26.944 START TEST lvs_grow_dirty 00:17:26.944 ************************************ 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.944 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.203 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:27.203 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:27.463 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1a874243-d3af-410d-8177-aa058c87b04b 00:17:27.463 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:27.463 20:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:27.722 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:27.722 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:27.722 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a874243-d3af-410d-8177-aa058c87b04b lvol 150 00:17:27.980 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ceeab295-60a4-4bd1-a446-61498ecb56fa 00:17:27.980 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.980 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:28.248 
[2024-07-15 20:23:06.588165] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:28.248 [2024-07-15 20:23:06.588261] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:28.248 true 00:17:28.248 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:28.248 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:28.511 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:28.511 20:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:28.769 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ceeab295-60a4-4bd1-a446-61498ecb56fa 00:17:29.027 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:29.285 [2024-07-15 20:23:07.683488] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.285 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4034304 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4034304 /var/tmp/bdevperf.sock 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4034304 ']' 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.543 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
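Condensed, the dirty-run setup traced above reduces to the following RPC sequence. The $RPC and $AIO shorthands are ours; the sizes, flags and UUIDs are taken verbatim from the trace, which actually drives these steps from nvmf_lvs_grow.sh rather than from a standalone script like this:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$AIO"                                   # fresh 200 MiB backing file
  $RPC bdev_aio_create "$AIO" aio_bdev 4096                 # AIO bdev with 4 KiB blocks
  $RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  $RPC bdev_lvol_create -u 1a874243-d3af-410d-8177-aa058c87b04b lvol 150    # 150 MiB lvol

  truncate -s 400M "$AIO"                                   # grow the file underneath the bdev
  $RPC bdev_aio_rescan aio_bdev                             # block count 51200 -> 102400

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ceeab295-60a4-4bd1-a446-61498ecb56fa
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Note that bdev_aio_rescan only resizes the AIO bdev; the lvstore keeps reporting 49 data clusters until bdev_lvol_grow_lvstore is issued during the bdevperf run (visible below around the 2-second mark), after which total_data_clusters reads 99. bdevperf itself is started on /var/tmp/bdevperf.sock with a 4 KiB random-write workload at queue depth 128 and attaches to the subsystem over TCP, which is where the per-second tables below come from.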
00:17:29.544 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.544 20:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:29.544 [2024-07-15 20:23:08.025397] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:17:29.544 [2024-07-15 20:23:08.025476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034304 ] 00:17:29.544 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.801 [2024-07-15 20:23:08.086478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.801 [2024-07-15 20:23:08.176618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.801 20:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.801 20:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:29.801 20:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:30.369 Nvme0n1 00:17:30.369 20:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:30.629 [ 00:17:30.629 { 00:17:30.629 "name": "Nvme0n1", 00:17:30.629 "aliases": [ 00:17:30.629 "ceeab295-60a4-4bd1-a446-61498ecb56fa" 00:17:30.629 ], 00:17:30.629 "product_name": "NVMe disk", 00:17:30.629 "block_size": 4096, 00:17:30.629 "num_blocks": 38912, 00:17:30.629 "uuid": "ceeab295-60a4-4bd1-a446-61498ecb56fa", 00:17:30.629 "assigned_rate_limits": { 00:17:30.629 "rw_ios_per_sec": 0, 00:17:30.629 "rw_mbytes_per_sec": 0, 00:17:30.629 "r_mbytes_per_sec": 0, 00:17:30.629 "w_mbytes_per_sec": 0 00:17:30.629 }, 00:17:30.629 "claimed": false, 00:17:30.629 "zoned": false, 00:17:30.629 "supported_io_types": { 00:17:30.629 "read": true, 00:17:30.629 "write": true, 00:17:30.629 "unmap": true, 00:17:30.629 "flush": true, 00:17:30.629 "reset": true, 00:17:30.629 "nvme_admin": true, 00:17:30.629 "nvme_io": true, 00:17:30.629 "nvme_io_md": false, 00:17:30.629 "write_zeroes": true, 00:17:30.629 "zcopy": false, 00:17:30.629 "get_zone_info": false, 00:17:30.629 "zone_management": false, 00:17:30.629 "zone_append": false, 00:17:30.629 "compare": true, 00:17:30.629 "compare_and_write": true, 00:17:30.629 "abort": true, 00:17:30.629 "seek_hole": false, 00:17:30.629 "seek_data": false, 00:17:30.629 "copy": true, 00:17:30.629 "nvme_iov_md": false 00:17:30.629 }, 00:17:30.629 "memory_domains": [ 00:17:30.629 { 00:17:30.629 "dma_device_id": "system", 00:17:30.629 "dma_device_type": 1 00:17:30.629 } 00:17:30.629 ], 00:17:30.629 "driver_specific": { 00:17:30.629 "nvme": [ 00:17:30.629 { 00:17:30.629 "trid": { 00:17:30.629 "trtype": "TCP", 00:17:30.629 "adrfam": "IPv4", 00:17:30.629 "traddr": "10.0.0.2", 00:17:30.629 "trsvcid": "4420", 00:17:30.629 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:30.629 }, 00:17:30.629 "ctrlr_data": { 00:17:30.629 "cntlid": 1, 00:17:30.630 "vendor_id": "0x8086", 00:17:30.630 "model_number": "SPDK bdev Controller", 00:17:30.630 "serial_number": "SPDK0", 
00:17:30.630 "firmware_revision": "24.09", 00:17:30.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:30.630 "oacs": { 00:17:30.630 "security": 0, 00:17:30.630 "format": 0, 00:17:30.630 "firmware": 0, 00:17:30.630 "ns_manage": 0 00:17:30.630 }, 00:17:30.630 "multi_ctrlr": true, 00:17:30.630 "ana_reporting": false 00:17:30.630 }, 00:17:30.630 "vs": { 00:17:30.630 "nvme_version": "1.3" 00:17:30.630 }, 00:17:30.630 "ns_data": { 00:17:30.630 "id": 1, 00:17:30.630 "can_share": true 00:17:30.630 } 00:17:30.630 } 00:17:30.630 ], 00:17:30.630 "mp_policy": "active_passive" 00:17:30.630 } 00:17:30.630 } 00:17:30.630 ] 00:17:30.630 20:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4034440 00:17:30.630 20:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:30.630 20:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:30.630 Running I/O for 10 seconds... 00:17:32.018 Latency(us) 00:17:32.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.018 Nvme0n1 : 1.00 14356.00 56.08 0.00 0.00 0.00 0.00 0.00 00:17:32.018 =================================================================================================================== 00:17:32.018 Total : 14356.00 56.08 0.00 0.00 0.00 0.00 0.00 00:17:32.018 00:17:32.583 20:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:32.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.840 Nvme0n1 : 2.00 14593.00 57.00 0.00 0.00 0.00 0.00 0.00 00:17:32.840 =================================================================================================================== 00:17:32.840 Total : 14593.00 57.00 0.00 0.00 0.00 0.00 0.00 00:17:32.840 00:17:32.840 true 00:17:32.840 20:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:32.840 20:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:33.099 20:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:33.099 20:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:33.099 20:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4034440 00:17:33.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.664 Nvme0n1 : 3.00 14614.00 57.09 0.00 0.00 0.00 0.00 0.00 00:17:33.664 =================================================================================================================== 00:17:33.664 Total : 14614.00 57.09 0.00 0.00 0.00 0.00 0.00 00:17:33.664 00:17:35.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.040 Nvme0n1 : 4.00 14768.50 57.69 0.00 0.00 0.00 0.00 0.00 00:17:35.040 =================================================================================================================== 00:17:35.040 Total : 14768.50 57.69 0.00 
0.00 0.00 0.00 0.00 00:17:35.040 00:17:35.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.980 Nvme0n1 : 5.00 14810.00 57.85 0.00 0.00 0.00 0.00 0.00 00:17:35.980 =================================================================================================================== 00:17:35.980 Total : 14810.00 57.85 0.00 0.00 0.00 0.00 0.00 00:17:35.980 00:17:36.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.918 Nvme0n1 : 6.00 14869.67 58.08 0.00 0.00 0.00 0.00 0.00 00:17:36.918 =================================================================================================================== 00:17:36.918 Total : 14869.67 58.08 0.00 0.00 0.00 0.00 0.00 00:17:36.918 00:17:37.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.858 Nvme0n1 : 7.00 14958.00 58.43 0.00 0.00 0.00 0.00 0.00 00:17:37.858 =================================================================================================================== 00:17:37.858 Total : 14958.00 58.43 0.00 0.00 0.00 0.00 0.00 00:17:37.858 00:17:38.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.797 Nvme0n1 : 8.00 14944.12 58.38 0.00 0.00 0.00 0.00 0.00 00:17:38.797 =================================================================================================================== 00:17:38.797 Total : 14944.12 58.38 0.00 0.00 0.00 0.00 0.00 00:17:38.797 00:17:39.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.741 Nvme0n1 : 9.00 15011.56 58.64 0.00 0.00 0.00 0.00 0.00 00:17:39.741 =================================================================================================================== 00:17:39.741 Total : 15011.56 58.64 0.00 0.00 0.00 0.00 0.00 00:17:39.741 00:17:40.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.680 Nvme0n1 : 10.00 15001.50 58.60 0.00 0.00 0.00 0.00 0.00 00:17:40.680 =================================================================================================================== 00:17:40.680 Total : 15001.50 58.60 0.00 0.00 0.00 0.00 0.00 00:17:40.680 00:17:40.680 00:17:40.680 Latency(us) 00:17:40.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.680 Nvme0n1 : 10.01 15001.00 58.60 0.00 0.00 8526.70 4757.43 15534.46 00:17:40.680 =================================================================================================================== 00:17:40.680 Total : 15001.00 58.60 0.00 0.00 8526.70 4757.43 15534.46 00:17:40.680 0 00:17:40.680 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4034304 00:17:40.680 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4034304 ']' 00:17:40.680 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4034304 00:17:40.680 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:40.680 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.680 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034304 00:17:40.938 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:40.938 20:23:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:40.938 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034304' 00:17:40.938 killing process with pid 4034304 00:17:40.938 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4034304 00:17:40.938 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.938 00:17:40.938 Latency(us) 00:17:40.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.938 =================================================================================================================== 00:17:40.938 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.938 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4034304 00:17:40.938 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:41.506 20:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:41.766 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:41.766 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4031701 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4031701 00:17:42.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4031701 Killed "${NVMF_APP[@]}" "$@" 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4035763 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4035763 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4035763 ']' 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.027 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:42.027 [2024-07-15 20:23:20.385867] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:17:42.027 [2024-07-15 20:23:20.385985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.027 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.027 [2024-07-15 20:23:20.453515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.027 [2024-07-15 20:23:20.542948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.027 [2024-07-15 20:23:20.543002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.027 [2024-07-15 20:23:20.543015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.027 [2024-07-15 20:23:20.543026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.027 [2024-07-15 20:23:20.543037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
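Because the previous target was killed with kill -9 while the lvstore was dirty, the freshly started nvmf_tgt has to recover the blobstore when the backing file is re-attached. A minimal sketch of what the trace below does next, reusing the $RPC/$AIO shorthand from above:

  # Re-create the AIO bdev on the same file inside the restarted target; loading an
  # lvstore that was never cleanly unloaded triggers blobstore recovery (the
  # "Performing recovery on blobstore" / "Recover: blob 0x0 / 0x1" notices below).
  $RPC bdev_aio_create "$AIO" aio_bdev 4096

  # After recovery the accounting must match what was checked before the kill:
  $RPC bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b | jq -r '.[0].free_clusters'         # expected 61
  $RPC bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b | jq -r '.[0].total_data_clusters'   # expected 99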
00:17:42.027 [2024-07-15 20:23:20.543065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.286 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.286 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:42.286 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.286 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.286 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:42.286 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.286 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:42.545 [2024-07-15 20:23:20.958336] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:42.545 [2024-07-15 20:23:20.958468] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:42.545 [2024-07-15 20:23:20.958514] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ceeab295-60a4-4bd1-a446-61498ecb56fa 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ceeab295-60a4-4bd1-a446-61498ecb56fa 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.545 20:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:42.804 20:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ceeab295-60a4-4bd1-a446-61498ecb56fa -t 2000 00:17:43.063 [ 00:17:43.063 { 00:17:43.063 "name": "ceeab295-60a4-4bd1-a446-61498ecb56fa", 00:17:43.063 "aliases": [ 00:17:43.063 "lvs/lvol" 00:17:43.063 ], 00:17:43.063 "product_name": "Logical Volume", 00:17:43.063 "block_size": 4096, 00:17:43.063 "num_blocks": 38912, 00:17:43.063 "uuid": "ceeab295-60a4-4bd1-a446-61498ecb56fa", 00:17:43.063 "assigned_rate_limits": { 00:17:43.063 "rw_ios_per_sec": 0, 00:17:43.063 "rw_mbytes_per_sec": 0, 00:17:43.063 "r_mbytes_per_sec": 0, 00:17:43.063 "w_mbytes_per_sec": 0 00:17:43.063 }, 00:17:43.063 "claimed": false, 00:17:43.063 "zoned": false, 00:17:43.063 "supported_io_types": { 00:17:43.063 "read": true, 00:17:43.063 "write": true, 00:17:43.063 "unmap": true, 00:17:43.063 "flush": false, 00:17:43.063 "reset": true, 00:17:43.063 "nvme_admin": false, 00:17:43.063 "nvme_io": false, 00:17:43.063 "nvme_io_md": 
false, 00:17:43.063 "write_zeroes": true, 00:17:43.063 "zcopy": false, 00:17:43.063 "get_zone_info": false, 00:17:43.063 "zone_management": false, 00:17:43.063 "zone_append": false, 00:17:43.063 "compare": false, 00:17:43.063 "compare_and_write": false, 00:17:43.063 "abort": false, 00:17:43.063 "seek_hole": true, 00:17:43.063 "seek_data": true, 00:17:43.063 "copy": false, 00:17:43.063 "nvme_iov_md": false 00:17:43.063 }, 00:17:43.063 "driver_specific": { 00:17:43.063 "lvol": { 00:17:43.063 "lvol_store_uuid": "1a874243-d3af-410d-8177-aa058c87b04b", 00:17:43.063 "base_bdev": "aio_bdev", 00:17:43.063 "thin_provision": false, 00:17:43.063 "num_allocated_clusters": 38, 00:17:43.063 "snapshot": false, 00:17:43.063 "clone": false, 00:17:43.063 "esnap_clone": false 00:17:43.063 } 00:17:43.063 } 00:17:43.063 } 00:17:43.063 ] 00:17:43.063 20:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:43.063 20:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:43.063 20:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:43.323 20:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:43.323 20:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:43.323 20:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:43.582 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:43.582 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:43.841 [2024-07-15 20:23:22.311835] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:43.841 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:44.099 request: 00:17:44.099 { 00:17:44.099 "uuid": "1a874243-d3af-410d-8177-aa058c87b04b", 00:17:44.099 "method": "bdev_lvol_get_lvstores", 00:17:44.099 "req_id": 1 00:17:44.099 } 00:17:44.099 Got JSON-RPC error response 00:17:44.099 response: 00:17:44.099 { 00:17:44.099 "code": -19, 00:17:44.099 "message": "No such device" 00:17:44.099 } 00:17:44.099 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:44.099 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:44.099 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:44.099 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:44.099 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:44.666 aio_bdev 00:17:44.666 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ceeab295-60a4-4bd1-a446-61498ecb56fa 00:17:44.666 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ceeab295-60a4-4bd1-a446-61498ecb56fa 00:17:44.666 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:44.666 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:44.666 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:44.666 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:44.666 20:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.924 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ceeab295-60a4-4bd1-a446-61498ecb56fa -t 2000 00:17:45.181 [ 00:17:45.181 { 00:17:45.181 "name": "ceeab295-60a4-4bd1-a446-61498ecb56fa", 00:17:45.181 "aliases": [ 00:17:45.181 "lvs/lvol" 00:17:45.181 ], 00:17:45.181 "product_name": "Logical Volume", 00:17:45.181 "block_size": 4096, 00:17:45.181 "num_blocks": 38912, 00:17:45.181 "uuid": "ceeab295-60a4-4bd1-a446-61498ecb56fa", 00:17:45.181 "assigned_rate_limits": { 00:17:45.181 "rw_ios_per_sec": 0, 00:17:45.181 "rw_mbytes_per_sec": 0, 00:17:45.181 "r_mbytes_per_sec": 0, 00:17:45.181 "w_mbytes_per_sec": 0 00:17:45.181 }, 00:17:45.181 "claimed": false, 00:17:45.181 "zoned": false, 00:17:45.181 "supported_io_types": { 
00:17:45.181 "read": true, 00:17:45.181 "write": true, 00:17:45.181 "unmap": true, 00:17:45.181 "flush": false, 00:17:45.181 "reset": true, 00:17:45.181 "nvme_admin": false, 00:17:45.181 "nvme_io": false, 00:17:45.181 "nvme_io_md": false, 00:17:45.181 "write_zeroes": true, 00:17:45.181 "zcopy": false, 00:17:45.181 "get_zone_info": false, 00:17:45.181 "zone_management": false, 00:17:45.181 "zone_append": false, 00:17:45.181 "compare": false, 00:17:45.181 "compare_and_write": false, 00:17:45.181 "abort": false, 00:17:45.181 "seek_hole": true, 00:17:45.181 "seek_data": true, 00:17:45.181 "copy": false, 00:17:45.181 "nvme_iov_md": false 00:17:45.181 }, 00:17:45.181 "driver_specific": { 00:17:45.181 "lvol": { 00:17:45.181 "lvol_store_uuid": "1a874243-d3af-410d-8177-aa058c87b04b", 00:17:45.181 "base_bdev": "aio_bdev", 00:17:45.181 "thin_provision": false, 00:17:45.181 "num_allocated_clusters": 38, 00:17:45.181 "snapshot": false, 00:17:45.181 "clone": false, 00:17:45.181 "esnap_clone": false 00:17:45.181 } 00:17:45.181 } 00:17:45.181 } 00:17:45.181 ] 00:17:45.181 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:45.181 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:45.181 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:45.438 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:45.438 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:45.438 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:45.697 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:45.697 20:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ceeab295-60a4-4bd1-a446-61498ecb56fa 00:17:45.956 20:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a874243-d3af-410d-8177-aa058c87b04b 00:17:46.213 20:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:46.213 20:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.472 00:17:46.472 real 0m19.504s 00:17:46.472 user 0m49.138s 00:17:46.472 sys 0m4.756s 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:46.472 ************************************ 00:17:46.472 END TEST lvs_grow_dirty 00:17:46.472 ************************************ 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:46.472 nvmf_trace.0 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:46.472 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.473 rmmod nvme_tcp 00:17:46.473 rmmod nvme_fabrics 00:17:46.473 rmmod nvme_keyring 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4035763 ']' 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4035763 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4035763 ']' 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4035763 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4035763 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4035763' 00:17:46.473 killing process with pid 4035763 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4035763 00:17:46.473 20:23:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4035763 00:17:46.731 20:23:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:46.731 20:23:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:46.731 20:23:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:46.731 
20:23:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.731 20:23:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.731 20:23:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.731 20:23:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.731 20:23:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.265 20:23:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:49.265 00:17:49.265 real 0m42.533s 00:17:49.265 user 1m12.080s 00:17:49.265 sys 0m8.566s 00:17:49.265 20:23:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.265 20:23:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:49.265 ************************************ 00:17:49.265 END TEST nvmf_lvs_grow 00:17:49.265 ************************************ 00:17:49.265 20:23:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:49.265 20:23:27 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:49.265 20:23:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:49.265 20:23:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.265 20:23:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:49.265 ************************************ 00:17:49.265 START TEST nvmf_bdev_io_wait 00:17:49.265 ************************************ 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:49.265 * Looking for test storage... 
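The nvmf_bdev_io_wait test that starts here begins, like the previous one, by sourcing test/nvmf/common.sh and running nvmftestinit, which generates a host NQN and scans the PCI bus for supported NICs. The wall of xtrace that follows is essentially this discovery loop; a rough sketch, using only the variables visible in the trace:

  # Classify supported devices (Intel E810 0x1592/0x159b, X722 0x37d2, various Mellanox IDs),
  # keep the E810 ports for the tcp transport, and record the netdevs sitting under them.
  pci_devs=("${e810[@]}")
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")       # -> cvl_0_0 under 0000:0a:00.0, etc.
  done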
00:17:49.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:49.265 20:23:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:51.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:51.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:51.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:51.162 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:51.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:51.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:17:51.162 00:17:51.162 --- 10.0.0.2 ping statistics --- 00:17:51.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.162 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:17:51.162 00:17:51.162 --- 10.0.0.1 ping statistics --- 00:17:51.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.162 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4038283 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4038283 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4038283 ']' 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.162 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.162 [2024-07-15 20:23:29.531064] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:17:51.162 [2024-07-15 20:23:29.531150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.162 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.162 [2024-07-15 20:23:29.600711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.419 [2024-07-15 20:23:29.695661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.419 [2024-07-15 20:23:29.695722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.419 [2024-07-15 20:23:29.695738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.419 [2024-07-15 20:23:29.695751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.419 [2024-07-15 20:23:29.695762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.419 [2024-07-15 20:23:29.695850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.419 [2024-07-15 20:23:29.695913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.419 [2024-07-15 20:23:29.695942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.419 [2024-07-15 20:23:29.695945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.419 [2024-07-15 20:23:29.837075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
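
The bring-up traced above can be replayed by hand against a target started with --wait-for-rpc; a minimal sketch, assuming the usual rpc_cmd wrapper around scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket (the variable name and socket path here are illustrative, not taken from this log):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# Shrink the bdev_io pool and cache so I/O submissions run out of bdev_io
# structs and exercise the queue-and-wait path this test is named after.
$RPC bdev_set_options -p 5 -c 1
# The target was launched with --wait-for-rpc, so initialization has to be
# finished explicitly before any transport can be created.
$RPC framework_start_init
# Same transport options as the run above (-t tcp -o -u 8192).
$RPC nvmf_create_transport -t tcp -o -u 8192

The malloc bdev, subsystem, namespace and TCP listener that follow below are created over the same RPC channel.
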
00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.419 Malloc0 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.419 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:51.420 [2024-07-15 20:23:29.902664] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4038311 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4038312 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4038315 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:51.420 { 00:17:51.420 "params": { 00:17:51.420 "name": "Nvme$subsystem", 00:17:51.420 "trtype": "$TEST_TRANSPORT", 00:17:51.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "$NVMF_PORT", 00:17:51.420 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:51.420 "hdgst": ${hdgst:-false}, 00:17:51.420 "ddgst": ${ddgst:-false} 00:17:51.420 }, 00:17:51.420 "method": "bdev_nvme_attach_controller" 00:17:51.420 } 00:17:51.420 EOF 00:17:51.420 )") 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4038317 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:51.420 { 00:17:51.420 "params": { 00:17:51.420 "name": "Nvme$subsystem", 00:17:51.420 "trtype": "$TEST_TRANSPORT", 00:17:51.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "$NVMF_PORT", 00:17:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:51.420 "hdgst": ${hdgst:-false}, 00:17:51.420 "ddgst": ${ddgst:-false} 00:17:51.420 }, 00:17:51.420 "method": "bdev_nvme_attach_controller" 00:17:51.420 } 00:17:51.420 EOF 00:17:51.420 )") 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:51.420 { 00:17:51.420 "params": { 00:17:51.420 "name": "Nvme$subsystem", 00:17:51.420 "trtype": "$TEST_TRANSPORT", 00:17:51.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "$NVMF_PORT", 00:17:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:51.420 "hdgst": ${hdgst:-false}, 00:17:51.420 "ddgst": ${ddgst:-false} 00:17:51.420 }, 00:17:51.420 "method": "bdev_nvme_attach_controller" 00:17:51.420 } 00:17:51.420 EOF 00:17:51.420 )") 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:51.420 { 00:17:51.420 "params": { 
00:17:51.420 "name": "Nvme$subsystem", 00:17:51.420 "trtype": "$TEST_TRANSPORT", 00:17:51.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "$NVMF_PORT", 00:17:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:51.420 "hdgst": ${hdgst:-false}, 00:17:51.420 "ddgst": ${ddgst:-false} 00:17:51.420 }, 00:17:51.420 "method": "bdev_nvme_attach_controller" 00:17:51.420 } 00:17:51.420 EOF 00:17:51.420 )") 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4038311 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:51.420 "params": { 00:17:51.420 "name": "Nvme1", 00:17:51.420 "trtype": "tcp", 00:17:51.420 "traddr": "10.0.0.2", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "4420", 00:17:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:51.420 "hdgst": false, 00:17:51.420 "ddgst": false 00:17:51.420 }, 00:17:51.420 "method": "bdev_nvme_attach_controller" 00:17:51.420 }' 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:51.420 "params": { 00:17:51.420 "name": "Nvme1", 00:17:51.420 "trtype": "tcp", 00:17:51.420 "traddr": "10.0.0.2", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "4420", 00:17:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:51.420 "hdgst": false, 00:17:51.420 "ddgst": false 00:17:51.420 }, 00:17:51.420 "method": "bdev_nvme_attach_controller" 00:17:51.420 }' 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:51.420 "params": { 00:17:51.420 "name": "Nvme1", 00:17:51.420 "trtype": "tcp", 00:17:51.420 "traddr": "10.0.0.2", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "4420", 00:17:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:51.420 "hdgst": false, 00:17:51.420 "ddgst": false 00:17:51.420 }, 00:17:51.420 "method": "bdev_nvme_attach_controller" 00:17:51.420 }' 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:51.420 20:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:51.420 "params": { 00:17:51.420 "name": "Nvme1", 00:17:51.420 "trtype": "tcp", 00:17:51.420 "traddr": "10.0.0.2", 00:17:51.420 "adrfam": "ipv4", 00:17:51.420 "trsvcid": "4420", 00:17:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:51.420 "hdgst": false, 00:17:51.420 "ddgst": false 00:17:51.420 }, 00:17:51.420 "method": 
"bdev_nvme_attach_controller" 00:17:51.420 }' 00:17:51.677 [2024-07-15 20:23:29.950464] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:17:51.677 [2024-07-15 20:23:29.950476] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:17:51.677 [2024-07-15 20:23:29.950475] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:17:51.677 [2024-07-15 20:23:29.950475] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:17:51.677 [2024-07-15 20:23:29.950544] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:51.677 [2024-07-15 20:23:29.950551] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 20:23:29.950551] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 20:23:29.950551] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:51.677 --proc-type=auto ] 00:17:51.677 --proc-type=auto ] 00:17:51.677 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.677 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.677 [2024-07-15 20:23:30.126620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.677 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.677 [2024-07-15 20:23:30.200227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:51.936 [2024-07-15 20:23:30.220863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.936 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.936 [2024-07-15 20:23:30.297594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:51.936 [2024-07-15 20:23:30.349421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.936 [2024-07-15 20:23:30.404358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.936 [2024-07-15 20:23:30.428274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:52.197 [2024-07-15 20:23:30.475573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:52.197 Running I/O for 1 seconds... 00:17:52.197 Running I/O for 1 seconds... 00:17:52.197 Running I/O for 1 seconds... 00:17:52.455 Running I/O for 1 seconds... 
00:17:53.394 00:17:53.394 Latency(us) 00:17:53.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.394 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:53.394 Nvme1n1 : 1.01 10630.17 41.52 0.00 0.00 11986.45 8301.23 19806.44 00:17:53.394 =================================================================================================================== 00:17:53.394 Total : 10630.17 41.52 0.00 0.00 11986.45 8301.23 19806.44 00:17:53.394 00:17:53.394 Latency(us) 00:17:53.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.394 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:53.394 Nvme1n1 : 1.01 9306.57 36.35 0.00 0.00 13691.10 7912.87 24758.04 00:17:53.394 =================================================================================================================== 00:17:53.394 Total : 9306.57 36.35 0.00 0.00 13691.10 7912.87 24758.04 00:17:53.394 00:17:53.394 Latency(us) 00:17:53.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.394 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:53.394 Nvme1n1 : 1.02 28013.41 109.43 0.00 0.00 4547.22 479.38 340204.66 00:17:53.394 =================================================================================================================== 00:17:53.394 Total : 28013.41 109.43 0.00 0.00 4547.22 479.38 340204.66 00:17:53.394 00:17:53.394 Latency(us) 00:17:53.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.395 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:53.395 Nvme1n1 : 1.01 8665.33 33.85 0.00 0.00 14711.62 7524.50 27573.67 00:17:53.395 =================================================================================================================== 00:17:53.395 Total : 8665.33 33.85 0.00 0.00 14711.62 7524.50 27573.67 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4038312 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4038315 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4038317 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.654 rmmod nvme_tcp 00:17:53.654 rmmod nvme_fabrics 00:17:53.654 rmmod nvme_keyring 00:17:53.654 20:23:32 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4038283 ']' 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4038283 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4038283 ']' 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4038283 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4038283 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:53.654 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4038283' 00:17:53.654 killing process with pid 4038283 00:17:53.655 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4038283 00:17:53.655 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4038283 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.914 20:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.500 20:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:56.500 00:17:56.500 real 0m7.185s 00:17:56.500 user 0m15.403s 00:17:56.500 sys 0m3.635s 00:17:56.500 20:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:56.500 20:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:56.500 ************************************ 00:17:56.500 END TEST nvmf_bdev_io_wait 00:17:56.500 ************************************ 00:17:56.500 20:23:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:56.500 20:23:34 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:56.500 20:23:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:56.500 20:23:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.500 20:23:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:56.500 ************************************ 00:17:56.500 START TEST nvmf_queue_depth 00:17:56.500 
************************************ 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:56.500 * Looking for test storage... 00:17:56.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.500 20:23:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.408 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:58.409 
20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:58.409 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:58.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:58.409 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:58.409 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:58.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:58.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:17:58.409 00:17:58.409 --- 10.0.0.2 ping statistics --- 00:17:58.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.409 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:58.409 00:17:58.409 --- 10.0.0.1 ping statistics --- 00:17:58.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.409 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4040532 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4040532 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4040532 ']' 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.409 [2024-07-15 20:23:36.646760] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:17:58.409 [2024-07-15 20:23:36.646846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.409 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.409 [2024-07-15 20:23:36.713359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.409 [2024-07-15 20:23:36.805586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.409 [2024-07-15 20:23:36.805641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.409 [2024-07-15 20:23:36.805658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.409 [2024-07-15 20:23:36.805672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.409 [2024-07-15 20:23:36.805684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.409 [2024-07-15 20:23:36.805712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.409 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 [2024-07-15 20:23:36.958387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 Malloc0 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.669 20:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.669 
20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 [2024-07-15 20:23:37.019493] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4040561 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4040561 /var/tmp/bdevperf.sock 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4040561 ']' 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.669 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.669 [2024-07-15 20:23:37.070864] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
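The provisioning steps echoed above (queue_depth.sh@23-@27) and the bdevperf launch (@29-@35) are all driven over RPC. Stripped of the rpc_cmd/xtrace wrappers they amount to the sketch below; every argument is copied from the trace, and it assumes rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# Target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks, and a
# subsystem that exports it as a listener on 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf idles (-z) on its own RPC socket, the remote controller
# is attached over TCP, and a 10 s verify run with queue depth 1024 and 4 KiB I/O starts.
$BDEVPERF -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests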
00:17:58.669 [2024-07-15 20:23:37.070959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040561 ] 00:17:58.669 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.669 [2024-07-15 20:23:37.139122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.929 [2024-07-15 20:23:37.235149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.929 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.929 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:58.929 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:58.929 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.929 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.189 NVMe0n1 00:17:59.189 20:23:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.189 20:23:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.189 Running I/O for 10 seconds... 00:18:11.406 00:18:11.406 Latency(us) 00:18:11.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.407 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:11.407 Verification LBA range: start 0x0 length 0x4000 00:18:11.407 NVMe0n1 : 10.08 8363.01 32.67 0.00 0.00 121816.62 23592.96 73400.32 00:18:11.407 =================================================================================================================== 00:18:11.407 Total : 8363.01 32.67 0.00 0.00 121816.62 23592.96 73400.32 00:18:11.407 0 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4040561 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4040561 ']' 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4040561 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4040561 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4040561' 00:18:11.407 killing process with pid 4040561 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4040561 00:18:11.407 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.407 00:18:11.407 Latency(us) 00:18:11.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.407 
=================================================================================================================== 00:18:11.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.407 20:23:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4040561 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.407 rmmod nvme_tcp 00:18:11.407 rmmod nvme_fabrics 00:18:11.407 rmmod nvme_keyring 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4040532 ']' 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4040532 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4040532 ']' 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4040532 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4040532 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4040532' 00:18:11.407 killing process with pid 4040532 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4040532 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4040532 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.407 20:23:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.973 20:23:50 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.973 00:18:11.973 real 0m15.983s 00:18:11.973 user 0m22.633s 00:18:11.973 sys 0m2.980s 00:18:11.973 20:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.973 20:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:11.973 ************************************ 00:18:11.973 END TEST nvmf_queue_depth 00:18:11.973 ************************************ 00:18:11.973 20:23:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.973 20:23:50 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:11.973 20:23:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.973 20:23:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.973 20:23:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:12.232 ************************************ 00:18:12.232 START TEST nvmf_target_multipath 00:18:12.232 ************************************ 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:12.232 * Looking for test storage... 00:18:12.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.232 20:23:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.233 20:23:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.233 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:12.233 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:12.233 20:23:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:12.233 20:23:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:14.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:14.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:14.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:14.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.136 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:14.137 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:14.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:18:14.396 00:18:14.396 --- 10.0.0.2 ping statistics --- 00:18:14.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.396 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:18:14.396 00:18:14.396 --- 10.0.0.1 ping statistics --- 00:18:14.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.396 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:14.396 only one NIC for nvmf test 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.396 rmmod nvme_tcp 00:18:14.396 rmmod nvme_fabrics 00:18:14.396 rmmod nvme_keyring 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.396 20:23:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.381 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:16.381 20:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:16.381 20:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:16.382 00:18:16.382 real 0m4.323s 00:18:16.382 user 0m0.802s 00:18:16.382 sys 0m1.515s 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:16.382 20:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.382 ************************************ 00:18:16.382 END TEST nvmf_target_multipath 00:18:16.382 ************************************ 00:18:16.382 20:23:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:16.382 20:23:54 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:16.382 20:23:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:16.382 20:23:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:16.382 20:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.382 ************************************ 00:18:16.382 START TEST nvmf_zcopy 00:18:16.382 ************************************ 00:18:16.382 20:23:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:16.641 * Looking for test storage... 
00:18:16.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.641 20:23:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:18.550 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.550 
20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:18.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:18.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:18.550 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:18.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:18:18.550 00:18:18.550 --- 10.0.0.2 ping statistics --- 00:18:18.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.550 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:18:18.550 00:18:18.550 --- 10.0.0.1 ping statistics --- 00:18:18.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.550 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4045714 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4045714 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 4045714 ']' 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.550 20:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.550 [2024-07-15 20:23:57.041169] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:18:18.550 [2024-07-15 20:23:57.041280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.550 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.808 [2024-07-15 20:23:57.104736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.808 [2024-07-15 20:23:57.192187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.808 [2024-07-15 20:23:57.192247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:18.808 [2024-07-15 20:23:57.192274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.808 [2024-07-15 20:23:57.192286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.808 [2024-07-15 20:23:57.192296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.808 [2024-07-15 20:23:57.192323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.808 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.808 [2024-07-15 20:23:57.337524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.067 [2024-07-15 20:23:57.353721] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.067 malloc0 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.067 
20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:19.067 { 00:18:19.067 "params": { 00:18:19.067 "name": "Nvme$subsystem", 00:18:19.067 "trtype": "$TEST_TRANSPORT", 00:18:19.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.067 "adrfam": "ipv4", 00:18:19.067 "trsvcid": "$NVMF_PORT", 00:18:19.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.067 "hdgst": ${hdgst:-false}, 00:18:19.067 "ddgst": ${ddgst:-false} 00:18:19.067 }, 00:18:19.067 "method": "bdev_nvme_attach_controller" 00:18:19.067 } 00:18:19.067 EOF 00:18:19.067 )") 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:19.067 20:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:19.067 "params": { 00:18:19.068 "name": "Nvme1", 00:18:19.068 "trtype": "tcp", 00:18:19.068 "traddr": "10.0.0.2", 00:18:19.068 "adrfam": "ipv4", 00:18:19.068 "trsvcid": "4420", 00:18:19.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.068 "hdgst": false, 00:18:19.068 "ddgst": false 00:18:19.068 }, 00:18:19.068 "method": "bdev_nvme_attach_controller" 00:18:19.068 }' 00:18:19.068 [2024-07-15 20:23:57.436865] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:18:19.068 [2024-07-15 20:23:57.436970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045744 ] 00:18:19.068 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.068 [2024-07-15 20:23:57.505655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.334 [2024-07-15 20:23:57.599731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.601 Running I/O for 10 seconds... 
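Stripped of the xtrace noise, the target-side setup traced above reduces to a short RPC sequence. A minimal sketch of the equivalent manual commands, assuming the default /var/tmp/spdk.sock RPC socket shown in the waitforlisten message (rpc_cmd in the trace is the autotest wrapper that forwards these arguments to scripts/rpc.py):

# Sketch: the zcopy target setup as individual rpc.py calls (run from the SPDK repo root)
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                        # TCP transport with zero-copy enabled; -o and -c 0 carried over from NVMF_TRANSPORT_OPTS above
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                               # 32 MB malloc bdev, 4096-byte blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1       # attach malloc0 as NSID 1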
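The --json /dev/fd/62 argument points bdevperf at the output of gen_nvmf_target_json, whose printf'd bdev_nvme_attach_controller entry is visible in the trace above. A hedged sketch of running the same verify workload by hand follows; the inner "params"/"method" object is taken from the printed config, while the outer "subsystems"/"bdev" wrapper is an assumption about gen_nvmf_target_json's full output and is not shown verbatim in this trace. (In the trace the framework additionally isolates bdevperf's EAL instance, e.g. --no-shconf and a pid-based --file-prefix, as visible in the DPDK EAL parameter line.)

# Sketch: equivalent manual bdevperf invocation against the target configured above
build/examples/bdevperf --json /dev/stdin -t 10 -q 128 -w verify -o 8192 <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF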
00:18:29.579 00:18:29.579 Latency(us) 00:18:29.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.579 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:29.579 Verification LBA range: start 0x0 length 0x1000 00:18:29.579 Nvme1n1 : 10.02 5776.64 45.13 0.00 0.00 22097.51 4053.52 28738.75 00:18:29.579 =================================================================================================================== 00:18:29.579 Total : 5776.64 45.13 0.00 0.00 22097.51 4053.52 28738.75 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4047528 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:29.837 { 00:18:29.837 "params": { 00:18:29.837 "name": "Nvme$subsystem", 00:18:29.837 "trtype": "$TEST_TRANSPORT", 00:18:29.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.837 "adrfam": "ipv4", 00:18:29.837 "trsvcid": "$NVMF_PORT", 00:18:29.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.837 "hdgst": ${hdgst:-false}, 00:18:29.837 "ddgst": ${ddgst:-false} 00:18:29.837 }, 00:18:29.837 "method": "bdev_nvme_attach_controller" 00:18:29.837 } 00:18:29.837 EOF 00:18:29.837 )") 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:29.837 [2024-07-15 20:24:08.210020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.210062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:29.837 20:24:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:29.837 "params": { 00:18:29.837 "name": "Nvme1", 00:18:29.837 "trtype": "tcp", 00:18:29.837 "traddr": "10.0.0.2", 00:18:29.837 "adrfam": "ipv4", 00:18:29.837 "trsvcid": "4420", 00:18:29.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.837 "hdgst": false, 00:18:29.837 "ddgst": false 00:18:29.837 }, 00:18:29.837 "method": "bdev_nvme_attach_controller" 00:18:29.837 }' 00:18:29.837 [2024-07-15 20:24:08.217977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.218001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.225991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.226014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.234006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.234028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.242024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.242045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.244640] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:18:29.837 [2024-07-15 20:24:08.244720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047528 ] 00:18:29.837 [2024-07-15 20:24:08.250048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.250071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.258077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.258101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.266098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.266121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.837 [2024-07-15 20:24:08.274122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.274146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.282133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.282167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.290176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.290196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.298196] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.298220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.306213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.837 [2024-07-15 20:24:08.306250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.837 [2024-07-15 20:24:08.307046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.837 [2024-07-15 20:24:08.314294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.838 [2024-07-15 20:24:08.314331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.838 [2024-07-15 20:24:08.322293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.838 [2024-07-15 20:24:08.322326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.838 [2024-07-15 20:24:08.330327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.838 [2024-07-15 20:24:08.330362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.838 [2024-07-15 20:24:08.338309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.838 [2024-07-15 20:24:08.338334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.838 [2024-07-15 20:24:08.346333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.838 [2024-07-15 20:24:08.346356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.838 [2024-07-15 20:24:08.354357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.838 [2024-07-15 20:24:08.354383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.838 [2024-07-15 20:24:08.362415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.838 [2024-07-15 20:24:08.362454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.370404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.370429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.378423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.378447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.386444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.386468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.394464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.394487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.399054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.096 [2024-07-15 20:24:08.402487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.402511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:18:30.096 [2024-07-15 20:24:08.410507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.410531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.418559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.418597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.426579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.426616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.434608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.434647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.442630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.442669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.450650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.450689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.458670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.458709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.466667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.466691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.474718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.474757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.482737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.482776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.490739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.490764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.498757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.498781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.506780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.506804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.514812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.514841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.522834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:30.096 [2024-07-15 20:24:08.522862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.530855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.530892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.538894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.538941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.546912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.546956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.554950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.096 [2024-07-15 20:24:08.554973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.096 [2024-07-15 20:24:08.562963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.097 [2024-07-15 20:24:08.562985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.097 [2024-07-15 20:24:08.570996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.097 [2024-07-15 20:24:08.571021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.097 [2024-07-15 20:24:08.578996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.097 [2024-07-15 20:24:08.579017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.097 Running I/O for 5 seconds... 
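The second bdevperf pass above switches to a 5-second 50/50 random read/write workload (-w randrw -M 50 -o 8192 -t 5) while nvmf_subsystem_add_ns requests for NSID 1 keep arriving at the target; each request is rejected because that NSID was already attached during setup, which is what produces the repeating ERROR pair that fills the rest of this run's output. A hypothetical one-off reproduction of the rejection (not taken from zcopy.sh):

# Sketch: re-adding an NSID that is already in use is rejected by the target
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
# Target-side log for each such attempt:
#   subsystem.c: spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
#   nvmf_rpc.c:  nvmf_rpc_ns_paused:             *ERROR*: Unable to add namespace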
00:18:30.097 [2024-07-15 20:24:08.587026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.097 [2024-07-15 20:24:08.587047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(this ERROR pair recurs once per nvmf_subsystem_add_ns attempt, roughly every 10-15 ms, from [2024-07-15 20:24:08.602301] through [2024-07-15 20:24:10.875409]; every attempt is rejected because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1)
00:18:32.428 [2024-07-15 20:24:10.886739]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.428 [2024-07-15 20:24:10.886769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.428 [2024-07-15 20:24:10.898418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.429 [2024-07-15 20:24:10.898447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.429 [2024-07-15 20:24:10.909856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.429 [2024-07-15 20:24:10.909896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.429 [2024-07-15 20:24:10.921012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.429 [2024-07-15 20:24:10.921039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.429 [2024-07-15 20:24:10.932256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.429 [2024-07-15 20:24:10.932286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.429 [2024-07-15 20:24:10.943577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.429 [2024-07-15 20:24:10.943607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.429 [2024-07-15 20:24:10.956678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.429 [2024-07-15 20:24:10.956709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:10.966852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:10.966891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:10.978781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:10.978811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:10.990478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:10.990507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.002074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.002109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.013594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.013625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.025289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.025319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.037096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.037123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.048458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.048488] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.059925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.059952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.071193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.071224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.082513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.082543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.094119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.094146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.105746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.105776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.117205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.117235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.128114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.128141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.139847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.139884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.150793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.150823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.162067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.162094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.173328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.173357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.184575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.184604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.196150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.196198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.206827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.206854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.688 [2024-07-15 20:24:11.217454] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.688 [2024-07-15 20:24:11.217481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.228736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.948 [2024-07-15 20:24:11.228767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.240191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.948 [2024-07-15 20:24:11.240222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.251940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.948 [2024-07-15 20:24:11.251968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.264170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.948 [2024-07-15 20:24:11.264204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.276254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.948 [2024-07-15 20:24:11.276285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.287982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.948 [2024-07-15 20:24:11.288010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.299517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.948 [2024-07-15 20:24:11.299548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.948 [2024-07-15 20:24:11.311195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.311226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.322434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.322466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.334340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.334370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.345624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.345654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.356941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.356968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.368274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.368301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.380245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.380276] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.392155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.392200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.404015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.404042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.415983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.416010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.427568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.427599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.439226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.439256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.450867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.450939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.462661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.462690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.949 [2024-07-15 20:24:11.474065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.949 [2024-07-15 20:24:11.474092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.485416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.485446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.496990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.497017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.507751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.507781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.518853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.518891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.530427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.530458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.541625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.541656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.552826] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.552856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.564626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.564656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.575963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.575991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.587292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.587323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.598641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.598673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.609927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.609956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.621315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.621346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.632961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.632989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.643659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.643690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.654695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.654725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.665665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.665696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.677065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.677093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.688185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.688216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.699527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.699557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.713097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.209 [2024-07-15 20:24:11.713125] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.209 [2024-07-15 20:24:11.724007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.210 [2024-07-15 20:24:11.724035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.210 [2024-07-15 20:24:11.735612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.210 [2024-07-15 20:24:11.735643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.746849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.746888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.757734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.757765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.769142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.769187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.780357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.780388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.791445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.791475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.802466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.802497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.813596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.813626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.824609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.824639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.835934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.835961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.847103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.847131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.860179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.860209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.870523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.870553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.882300] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.882331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.893818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.893849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.905447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.905478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.917253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.917284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.929028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.929055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.940751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.940781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.951828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.951860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.963407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.963437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.974861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.974899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.988234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.988265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.471 [2024-07-15 20:24:11.998627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.471 [2024-07-15 20:24:11.998657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.009373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.009404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.020742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.020772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.032461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.032493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.043602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.043633] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.055033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.055062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.066177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.066208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.079035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.079073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.089391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.089421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.100409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.100439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.112078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.112106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.123431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.123462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.134627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.134658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.146149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.146195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.157519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.157549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.168226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.168256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.179710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.179740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.191206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.191236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.731 [2024-07-15 20:24:12.202538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.731 [2024-07-15 20:24:12.202568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.732 [2024-07-15 20:24:12.214261] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.732 [2024-07-15 20:24:12.214293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.732 [2024-07-15 20:24:12.225867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.732 [2024-07-15 20:24:12.225907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.732 [2024-07-15 20:24:12.237279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.732 [2024-07-15 20:24:12.237309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.732 [2024-07-15 20:24:12.249019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.732 [2024-07-15 20:24:12.249056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.732 [2024-07-15 20:24:12.260589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.732 [2024-07-15 20:24:12.260620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.272038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.272066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.283377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.283408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.294781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.294825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.306387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.306418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.319678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.319709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.330267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.330298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.341674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.341705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.353016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.353043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.364487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.364517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.376287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.376317] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.387671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.387702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.399016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.399044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.410601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.410631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.421618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.421648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.432472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.432502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.443935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.443963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.455115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.455144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.466478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.991 [2024-07-15 20:24:12.466509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.991 [2024-07-15 20:24:12.477979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.992 [2024-07-15 20:24:12.478007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.992 [2024-07-15 20:24:12.488949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.992 [2024-07-15 20:24:12.488993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.992 [2024-07-15 20:24:12.500202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.992 [2024-07-15 20:24:12.500233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.992 [2024-07-15 20:24:12.511490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.992 [2024-07-15 20:24:12.511528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.522556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.522587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.534199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.534230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.545659] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.545690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.557054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.557083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.568847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.568888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.580623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.580653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.591656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.591686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.603417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.603447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.616589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.616620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.626922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.626950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.638854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.638895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.650177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.650209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.661759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.661789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.673354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.673384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.684875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.684930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.696307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.696338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.707644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.707675] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.719448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.719479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.731060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.731095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.742298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.742329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.753782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.753814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.764989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.765016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.252 [2024-07-15 20:24:12.776327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.252 [2024-07-15 20:24:12.776357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.511 [2024-07-15 20:24:12.788022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.511 [2024-07-15 20:24:12.788050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.511 [2024-07-15 20:24:12.799312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.511 [2024-07-15 20:24:12.799343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.511 [2024-07-15 20:24:12.810643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.511 [2024-07-15 20:24:12.810673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.511 [2024-07-15 20:24:12.822003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.511 [2024-07-15 20:24:12.822031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.511 [2024-07-15 20:24:12.833432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.511 [2024-07-15 20:24:12.833463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.511 [2024-07-15 20:24:12.845284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.511 [2024-07-15 20:24:12.845315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.511 [2024-07-15 20:24:12.856751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.856782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.868101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.868129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.879323] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.879354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.890377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.890407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.901179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.901209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.912164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.912209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.923596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.923628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.935131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.935174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.946470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.946501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.957866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.957906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.969126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.969172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.980302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.980333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:12.991854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:12.991892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:13.003181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:13.003211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:13.014755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:13.014786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:13.025991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:13.026018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.512 [2024-07-15 20:24:13.037292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.512 [2024-07-15 20:24:13.037323] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.048505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.048536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.059767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.059797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.071066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.071094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.082117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.082145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.093214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.093244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.104421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.104451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.115671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.115701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.127225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.127255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.138294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.138325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.149485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.149515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.161663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.161693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.173251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.173281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.189484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.189516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.199340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.199371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.211022] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.211049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.221684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.221715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.233070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.770 [2024-07-15 20:24:13.233098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.770 [2024-07-15 20:24:13.245706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.771 [2024-07-15 20:24:13.245736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.771 [2024-07-15 20:24:13.256401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.771 [2024-07-15 20:24:13.256431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.771 [2024-07-15 20:24:13.268411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.771 [2024-07-15 20:24:13.268441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.771 [2024-07-15 20:24:13.279689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.771 [2024-07-15 20:24:13.279719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.771 [2024-07-15 20:24:13.291084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.771 [2024-07-15 20:24:13.291111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.302850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.302891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.313990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.314017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.325102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.325130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.336303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.336334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.349610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.349641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.359726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.359756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.371169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.371199] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.382399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.382430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.393974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.394001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.405077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.405104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.418554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.418584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.428812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.428843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.440802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.440832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.451998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.452026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.463001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.463029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.474315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.474346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.485755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.485786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.497147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.497191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.508431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.508461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.521615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.521646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.531523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.531554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.543745] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.543776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.028 [2024-07-15 20:24:13.555098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.028 [2024-07-15 20:24:13.555137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.566720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.566752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.580075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.580102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.590310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.590344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.601842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.601872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288
00:18:35.288 Latency(us)
00:18:35.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.288 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:35.288 Nvme1n1 : 5.01 11273.53 88.07 0.00 0.00 11337.51 5097.24 26408.58
00:18:35.288 ===================================================================================================================
00:18:35.288 Total : 11273.53 88.07 0.00 0.00 11337.51 5097.24 26408.58
00:18:35.288 [2024-07-15 20:24:13.608190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.608219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.616198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.616225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.624257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.624301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.632295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.632342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.640316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.640361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.648325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.648370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.656341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.656385]
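The long run of add_ns failures around the performance summary is expected for this zcopy stage: the test keeps trying to attach another bdev at an NSID that is already occupied while I/O is in flight, and every attempt is rejected by spdk_nvmf_subsystem_add_ns_ext. A minimal sketch of the call that produces exactly this pair of errors, assuming a target that already exposes NSID 1 on cnode1 (the bdev name here is illustrative, not taken from the log):

# With NSID 1 already populated, this RPC is refused with
# "Requested NSID 1 already in use" followed by "Unable to add namespace".
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1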
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.664368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.664428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.672386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.672446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.680423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.680470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.688434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.688481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.696475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.696523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.704479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.704525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.712509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.712557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.720550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.720614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.728555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.728597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.736586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.736633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.744572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.744603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.752579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.752605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.760646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.760688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.768670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.768715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.776689] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.776732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.784675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.784702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.792720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.792756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.800777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.800825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.808787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.808833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.288 [2024-07-15 20:24:13.816764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.288 [2024-07-15 20:24:13.816789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.547 [2024-07-15 20:24:13.824784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.547 [2024-07-15 20:24:13.824809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.547 [2024-07-15 20:24:13.832808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.547 [2024-07-15 20:24:13.832832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4047528) - No such process 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4047528 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:35.547 delay0 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.547 20:24:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:35.547 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.547 [2024-07-15 20:24:13.950204] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:42.120 Initializing NVMe Controllers 00:18:42.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:42.120 Initialization complete. Launching workers. 00:18:42.120 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 115 00:18:42.120 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 402, failed to submit 33 00:18:42.120 success 198, unsuccess 204, failed 0 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.120 rmmod nvme_tcp 00:18:42.120 rmmod nvme_fabrics 00:18:42.120 rmmod nvme_keyring 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4045714 ']' 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4045714 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 4045714 ']' 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 4045714 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4045714 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4045714' 00:18:42.120 killing process with pid 4045714 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 4045714 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 4045714 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.120 20:24:20 
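Taken together, the abort stage of this zcopy test amounts to: detach the original namespace, wrap the malloc bdev in a delay bdev so queued commands stay outstanding long enough to be aborted, re-attach it as NSID 1, then drive it with the abort example over TCP. A rough standalone equivalent of those steps, reusing the names and arguments that appear in the trace (repository paths abbreviated):

scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# 5 s of queue-depth-64 random read/write with abort submission against the delayed namespace.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'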
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.120 20:24:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.052 20:24:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.052 00:18:44.052 real 0m27.547s 00:18:44.052 user 0m40.774s 00:18:44.052 sys 0m8.201s 00:18:44.052 20:24:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.052 20:24:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:44.052 ************************************ 00:18:44.052 END TEST nvmf_zcopy 00:18:44.052 ************************************ 00:18:44.052 20:24:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.052 20:24:22 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:44.052 20:24:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.052 20:24:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.052 20:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.052 ************************************ 00:18:44.052 START TEST nvmf_nmic 00:18:44.052 ************************************ 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:44.052 * Looking for test storage... 00:18:44.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.052 20:24:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.053 20:24:22 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:44.053 20:24:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.584 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.585 20:24:24 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:46.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:46.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:18:46.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:46.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.585 20:24:24 
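With both port functions of the E810 identified, the harness splits them across network namespaces so target and initiator traffic use separate ports: cvl_0_0 is moved into a private namespace and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as 10.0.0.1 for the initiator, and an iptables rule opens TCP port 4420. A sketch of that topology setup, assuming the same interface names as in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator-to-target sanity check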
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:46.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:18:46.585 00:18:46.585 --- 10.0.0.2 ping statistics --- 00:18:46.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.585 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:18:46.585 00:18:46.585 --- 10.0.0.1 ping statistics --- 00:18:46.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.585 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4050927 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4050927 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 4050927 ']' 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.585 20:24:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.585 [2024-07-15 20:24:24.790130] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
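nvmfappstart then launches the nvmf_tgt application inside the target-side namespace and waits for its JSON-RPC socket before any rpc_cmd is issued; every later RPC in this test goes through that socket. Approximately, with the binary path shortened and the polling loop reduced to a single blocking RPC:

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Block until the target answers on /var/tmp/spdk.sock; only then is it safe to configure it.
scripts/rpc.py -s /var/tmp/spdk.sock -t 60 rpc_get_methods > /dev/null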
00:18:46.585 [2024-07-15 20:24:24.790235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.585 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.585 [2024-07-15 20:24:24.854065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.585 [2024-07-15 20:24:24.941453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.585 [2024-07-15 20:24:24.941505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.585 [2024-07-15 20:24:24.941519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.585 [2024-07-15 20:24:24.941530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.585 [2024-07-15 20:24:24.941541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.585 [2024-07-15 20:24:24.941660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.585 [2024-07-15 20:24:24.942388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.585 [2024-07-15 20:24:24.942455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.585 [2024-07-15 20:24:24.942458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.585 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.586 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.586 [2024-07-15 20:24:25.095840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.586 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.586 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:46.586 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.586 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.844 Malloc0 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.844 [2024-07-15 20:24:25.147728] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:46.844 test case1: single bdev can't be used in multiple subsystems 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.844 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.844 [2024-07-15 20:24:25.171570] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:46.844 [2024-07-15 20:24:25.171601] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:46.844 [2024-07-15 20:24:25.171617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.844 request: 00:18:46.844 { 00:18:46.844 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:46.844 "namespace": { 00:18:46.844 "bdev_name": "Malloc0", 00:18:46.844 "no_auto_visible": false 00:18:46.845 }, 00:18:46.845 "method": "nvmf_subsystem_add_ns", 00:18:46.845 "req_id": 1 00:18:46.845 } 00:18:46.845 Got JSON-RPC error response 00:18:46.845 response: 00:18:46.845 { 00:18:46.845 "code": -32602, 00:18:46.845 "message": "Invalid parameters" 00:18:46.845 } 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
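The JSON-RPC failure above is the whole point of test case 1: once Malloc0 is attached to cnode1 it is claimed exclusive_write by the NVMe-oF target module, so attaching the same bdev to a second subsystem must be refused. Stripped down to the RPCs seen in the trace, the sequence is roughly:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed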
echo ' Adding namespace failed - expected result.' 00:18:46.845 Adding namespace failed - expected result. 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:46.845 test case2: host connect to nvmf target in multiple paths 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.845 [2024-07-15 20:24:25.179688] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.845 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.414 20:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:47.981 20:24:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.981 20:24:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:47.981 20:24:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.981 20:24:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:47.981 20:24:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:50.513 20:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:50.513 20:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:50.513 20:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.513 20:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:50.513 20:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.513 20:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:50.513 20:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:50.513 [global] 00:18:50.513 thread=1 00:18:50.513 invalidate=1 00:18:50.513 rw=write 00:18:50.513 time_based=1 00:18:50.513 runtime=1 00:18:50.513 ioengine=libaio 00:18:50.513 direct=1 00:18:50.513 bs=4096 00:18:50.513 iodepth=1 00:18:50.513 norandommap=0 00:18:50.513 numjobs=1 00:18:50.513 00:18:50.513 verify_dump=1 00:18:50.513 verify_backlog=512 00:18:50.513 verify_state_save=0 00:18:50.513 do_verify=1 00:18:50.513 verify=crc32c-intel 00:18:50.513 [job0] 00:18:50.513 filename=/dev/nvme0n1 00:18:50.513 Could not set queue depth (nvme0n1) 00:18:50.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.513 fio-3.35 00:18:50.513 Starting 1 thread 00:18:51.445 00:18:51.445 job0: (groupid=0, jobs=1): err= 0: pid=4051563: Mon Jul 15 20:24:29 2024 00:18:51.445 read: IOPS=1534, BW=6138KiB/s 
(6285kB/s)(6144KiB/1001msec) 00:18:51.445 slat (nsec): min=4789, max=63055, avg=17020.11, stdev=10439.57 00:18:51.445 clat (usec): min=285, max=1743, avg=367.21, stdev=60.28 00:18:51.445 lat (usec): min=291, max=1749, avg=384.23, stdev=65.74 00:18:51.445 clat percentiles (usec): 00:18:51.445 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:18:51.445 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 371], 00:18:51.445 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 461], 00:18:51.445 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 734], 99.95th=[ 1745], 00:18:51.445 | 99.99th=[ 1745] 00:18:51.445 write: IOPS=1574, BW=6298KiB/s (6449kB/s)(6304KiB/1001msec); 0 zone resets 00:18:51.445 slat (nsec): min=6091, max=72320, avg=18107.56, stdev=9393.84 00:18:51.445 clat (usec): min=186, max=426, avg=231.80, stdev=38.51 00:18:51.445 lat (usec): min=193, max=457, avg=249.91, stdev=45.94 00:18:51.445 clat percentiles (usec): 00:18:51.445 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:18:51.445 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:18:51.445 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 330], 00:18:51.445 | 99.00th=[ 383], 99.50th=[ 388], 99.90th=[ 404], 99.95th=[ 429], 00:18:51.445 | 99.99th=[ 429] 00:18:51.445 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:18:51.445 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:51.445 lat (usec) : 250=42.32%, 500=56.65%, 750=1.00% 00:18:51.445 lat (msec) : 2=0.03% 00:18:51.445 cpu : usr=4.20%, sys=5.80%, ctx=3112, majf=0, minf=2 00:18:51.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.445 issued rwts: total=1536,1576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.445 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.445 00:18:51.445 Run status group 0 (all jobs): 00:18:51.445 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:18:51.445 WRITE: bw=6298KiB/s (6449kB/s), 6298KiB/s-6298KiB/s (6449kB/s-6449kB/s), io=6304KiB (6455kB), run=1001-1001msec 00:18:51.445 00:18:51.445 Disk stats (read/write): 00:18:51.445 nvme0n1: ios=1321/1536, merge=0/0, ticks=607/342, in_queue=949, util=96.29% 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:51.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- 
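The fio-wrapper run above expands to a single libaio verify job against the namespace that was connected for test case 2 (the [global]/[job0] sections printed before the run). An equivalent direct fio invocation would look roughly like this, with the device name depending on how the controller enumerates on the initiator:

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --bs=4096 --iodepth=1 --rw=write --numjobs=1 --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512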
target/nmic.sh@53 -- # nvmftestfini 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.445 20:24:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.445 rmmod nvme_tcp 00:18:51.730 rmmod nvme_fabrics 00:18:51.730 rmmod nvme_keyring 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4050927 ']' 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4050927 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 4050927 ']' 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 4050927 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4050927 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4050927' 00:18:51.730 killing process with pid 4050927 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 4050927 00:18:51.730 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 4050927 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.990 20:24:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.894 20:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.894 00:18:53.894 real 0m9.859s 00:18:53.894 user 0m21.958s 00:18:53.894 sys 0m2.527s 00:18:53.894 20:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.894 20:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.894 ************************************ 00:18:53.894 END TEST nvmf_nmic 00:18:53.894 ************************************ 00:18:53.894 20:24:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:53.894 20:24:32 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:53.894 20:24:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:53.894 20:24:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.894 20:24:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:53.894 ************************************ 00:18:53.894 START TEST nvmf_fio_target 00:18:53.894 ************************************ 00:18:53.894 20:24:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:54.151 * Looking for test storage... 00:18:54.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.151 20:24:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.151 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:54.151 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.152 20:24:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:56.053 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.054 20:24:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:56.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:56.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.054 20:24:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:56.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:56.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:56.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:18:56.054 00:18:56.054 --- 10.0.0.2 ping statistics --- 00:18:56.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.054 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:18:56.054 00:18:56.054 --- 10.0.0.1 ping statistics --- 00:18:56.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.054 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:56.054 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4053639 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4053639 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 4053639 ']' 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
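The nvmf_tcp_init portion of the trace above wires the two ice ports (cvl_0_0 / cvl_0_1) into a point-to-point NVMe/TCP topology by pushing the target-side port into its own network namespace. Condensed into a standalone sketch -- interface names and addresses are the ones printed in the trace, the real logic lives in spdk/test/nvmf/common.sh -- the plumbing is roughly:

    # Sketch only: target side lives in a namespace so 10.0.0.1 <-> 10.0.0.2 crosses the wire
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host sanity check

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the process the trace is waiting on at this point.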
00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.315 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.315 [2024-07-15 20:24:34.652171] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:18:56.315 [2024-07-15 20:24:34.652261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.315 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.315 [2024-07-15 20:24:34.714643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.315 [2024-07-15 20:24:34.799516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.315 [2024-07-15 20:24:34.799567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.315 [2024-07-15 20:24:34.799591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.315 [2024-07-15 20:24:34.799602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.315 [2024-07-15 20:24:34.799612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.315 [2024-07-15 20:24:34.799750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.315 [2024-07-15 20:24:34.799816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.315 [2024-07-15 20:24:34.799889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.315 [2024-07-15 20:24:34.799894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.573 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.573 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:56.573 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.573 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.573 20:24:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.573 20:24:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.573 20:24:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:56.831 [2024-07-15 20:24:35.226783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.831 20:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.088 20:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:57.088 20:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.346 20:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:57.346 20:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.912 20:24:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
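The rpc.py calls traced above and below provision the target before any fio runs start: one TCP transport, two standalone malloc bdevs, two more combined into a raid0, three more combined into a concat, all exposed through a single subsystem with a TCP listener, and finally an nvme-cli connect from the host side. Collapsed into one sketch with the values copied from the trace (the rpc.py path is the workspace one shown above), the sequence is roughly:

    # Condensed from the xtrace output of target/fio.sh; options are copied verbatim from the trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                                   # Malloc0, Malloc1: plain namespaces
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512                                   # Malloc2, Malloc3: raid0 members
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_malloc_create 64 512                                   # Malloc4..Malloc6: concat members
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
         --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

The four namespaces then appear on the host as /dev/nvme0n1 through /dev/nvme0n4, which is what waitforserial counts and what the fio job files further down target.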
00:18:57.912 20:24:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.169 20:24:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:58.169 20:24:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:58.428 20:24:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.687 20:24:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:58.687 20:24:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.946 20:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:58.946 20:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.946 20:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:58.946 20:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:59.204 20:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:59.502 20:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:59.502 20:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:59.760 20:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:59.760 20:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:00.018 20:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.276 [2024-07-15 20:24:38.713719] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.276 20:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:00.534 20:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:00.792 20:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:01.726 20:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:01.726 20:24:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:01.726 20:24:39 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.726 20:24:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:01.726 20:24:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:01.726 20:24:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:03.634 20:24:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:03.634 20:24:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:03.634 20:24:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.634 20:24:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:03.634 20:24:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.634 20:24:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:03.634 20:24:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:03.634 [global] 00:19:03.634 thread=1 00:19:03.634 invalidate=1 00:19:03.634 rw=write 00:19:03.634 time_based=1 00:19:03.634 runtime=1 00:19:03.634 ioengine=libaio 00:19:03.634 direct=1 00:19:03.634 bs=4096 00:19:03.634 iodepth=1 00:19:03.634 norandommap=0 00:19:03.634 numjobs=1 00:19:03.634 00:19:03.634 verify_dump=1 00:19:03.634 verify_backlog=512 00:19:03.634 verify_state_save=0 00:19:03.634 do_verify=1 00:19:03.634 verify=crc32c-intel 00:19:03.634 [job0] 00:19:03.634 filename=/dev/nvme0n1 00:19:03.634 [job1] 00:19:03.634 filename=/dev/nvme0n2 00:19:03.634 [job2] 00:19:03.634 filename=/dev/nvme0n3 00:19:03.634 [job3] 00:19:03.634 filename=/dev/nvme0n4 00:19:03.634 Could not set queue depth (nvme0n1) 00:19:03.634 Could not set queue depth (nvme0n2) 00:19:03.634 Could not set queue depth (nvme0n3) 00:19:03.634 Could not set queue depth (nvme0n4) 00:19:03.893 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.893 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.893 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.893 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.893 fio-3.35 00:19:03.893 Starting 4 threads 00:19:05.270 00:19:05.270 job0: (groupid=0, jobs=1): err= 0: pid=4054702: Mon Jul 15 20:24:43 2024 00:19:05.270 read: IOPS=990, BW=3961KiB/s (4057kB/s)(4112KiB/1038msec) 00:19:05.270 slat (nsec): min=7254, max=35293, avg=10027.54, stdev=2802.83 00:19:05.270 clat (usec): min=297, max=41216, avg=528.88, stdev=2534.02 00:19:05.270 lat (usec): min=308, max=41228, avg=538.90, stdev=2534.67 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:19:05.270 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 375], 00:19:05.270 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 441], 00:19:05.270 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:19:05.270 | 99.99th=[41157] 00:19:05.270 write: IOPS=1479, BW=5919KiB/s (6061kB/s)(6144KiB/1038msec); 0 zone resets 00:19:05.270 slat (usec): min=9, max=40292, avg=62.78, stdev=1329.49 00:19:05.270 clat (usec): 
min=189, max=438, avg=246.02, stdev=40.58 00:19:05.270 lat (usec): min=202, max=40673, avg=308.80, stdev=1333.45 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:19:05.270 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 243], 00:19:05.270 | 70.00th=[ 258], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 326], 00:19:05.270 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 433], 99.95th=[ 441], 00:19:05.270 | 99.99th=[ 441] 00:19:05.270 bw ( KiB/s): min= 5648, max= 6640, per=39.05%, avg=6144.00, stdev=701.45, samples=2 00:19:05.270 iops : min= 1412, max= 1660, avg=1536.00, stdev=175.36, samples=2 00:19:05.270 lat (usec) : 250=39.35%, 500=59.67%, 750=0.82% 00:19:05.270 lat (msec) : 50=0.16% 00:19:05.270 cpu : usr=2.70%, sys=3.76%, ctx=2567, majf=0, minf=1 00:19:05.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 issued rwts: total=1028,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.270 job1: (groupid=0, jobs=1): err= 0: pid=4054703: Mon Jul 15 20:24:43 2024 00:19:05.270 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:19:05.270 slat (nsec): min=10837, max=20096, avg=16250.41, stdev=2180.24 00:19:05.270 clat (usec): min=6896, max=41192, avg=39447.99, stdev=7270.79 00:19:05.270 lat (usec): min=6912, max=41203, avg=39464.24, stdev=7270.77 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 6915], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:05.270 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:05.270 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:05.270 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:05.270 | 99.99th=[41157] 00:19:05.270 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:19:05.270 slat (nsec): min=9146, max=74802, avg=20506.19, stdev=10104.23 00:19:05.270 clat (usec): min=198, max=452, avg=270.01, stdev=42.28 00:19:05.270 lat (usec): min=208, max=475, avg=290.52, stdev=47.56 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 233], 00:19:05.270 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 277], 00:19:05.270 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 351], 00:19:05.270 | 99.00th=[ 392], 99.50th=[ 400], 99.90th=[ 453], 99.95th=[ 453], 00:19:05.270 | 99.99th=[ 453] 00:19:05.270 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.270 lat (usec) : 250=38.39%, 500=57.49% 00:19:05.270 lat (msec) : 10=0.19%, 50=3.93% 00:19:05.270 cpu : usr=1.18%, sys=0.78%, ctx=536, majf=0, minf=1 00:19:05.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.270 job2: (groupid=0, jobs=1): err= 0: pid=4054704: Mon Jul 15 20:24:43 2024 00:19:05.270 read: IOPS=1022, 
BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:05.270 slat (nsec): min=5519, max=25593, avg=11838.89, stdev=4319.96 00:19:05.270 clat (usec): min=318, max=593, avg=368.08, stdev=31.23 00:19:05.270 lat (usec): min=329, max=608, avg=379.92, stdev=33.00 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 347], 00:19:05.270 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 371], 00:19:05.270 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 392], 95.00th=[ 412], 00:19:05.270 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 586], 99.95th=[ 594], 00:19:05.270 | 99.99th=[ 594] 00:19:05.270 write: IOPS=1521, BW=6086KiB/s (6232kB/s)(6092KiB/1001msec); 0 zone resets 00:19:05.270 slat (nsec): min=7337, max=67873, avg=20438.00, stdev=10636.16 00:19:05.270 clat (usec): min=241, max=1929, avg=372.58, stdev=86.01 00:19:05.270 lat (usec): min=256, max=1947, avg=393.02, stdev=87.74 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 322], 00:19:05.270 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 375], 00:19:05.270 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 437], 95.00th=[ 465], 00:19:05.270 | 99.00th=[ 586], 99.50th=[ 742], 99.90th=[ 1647], 99.95th=[ 1926], 00:19:05.270 | 99.99th=[ 1926] 00:19:05.270 bw ( KiB/s): min= 5768, max= 5768, per=36.66%, avg=5768.00, stdev= 0.00, samples=1 00:19:05.270 iops : min= 1442, max= 1442, avg=1442.00, stdev= 0.00, samples=1 00:19:05.270 lat (usec) : 250=0.20%, 500=97.02%, 750=2.51%, 1000=0.08% 00:19:05.270 lat (msec) : 2=0.20% 00:19:05.270 cpu : usr=1.80%, sys=4.70%, ctx=2548, majf=0, minf=1 00:19:05.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 issued rwts: total=1024,1523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.270 job3: (groupid=0, jobs=1): err= 0: pid=4054705: Mon Jul 15 20:24:43 2024 00:19:05.270 read: IOPS=183, BW=735KiB/s (753kB/s)(736KiB/1001msec) 00:19:05.270 slat (nsec): min=7249, max=71261, avg=25873.10, stdev=11404.56 00:19:05.270 clat (usec): min=370, max=41146, avg=4413.31, stdev=12078.44 00:19:05.270 lat (usec): min=385, max=41161, avg=4439.19, stdev=12075.16 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 408], 00:19:05.270 | 30.00th=[ 424], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 461], 00:19:05.270 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 562], 95.00th=[41157], 00:19:05.270 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:05.270 | 99.99th=[41157] 00:19:05.270 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:05.270 slat (usec): min=7, max=1181, avg=23.28, stdev=52.85 00:19:05.270 clat (usec): min=190, max=759, avg=327.31, stdev=101.49 00:19:05.270 lat (usec): min=200, max=1513, avg=350.59, stdev=119.71 00:19:05.270 clat percentiles (usec): 00:19:05.270 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 223], 00:19:05.270 | 30.00th=[ 237], 40.00th=[ 273], 50.00th=[ 326], 60.00th=[ 359], 00:19:05.270 | 70.00th=[ 392], 80.00th=[ 420], 90.00th=[ 453], 95.00th=[ 486], 00:19:05.270 | 99.00th=[ 553], 99.50th=[ 734], 99.90th=[ 758], 99.95th=[ 758], 00:19:05.270 | 99.99th=[ 758] 00:19:05.270 bw ( KiB/s): min= 4096, max= 
4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.270 lat (usec) : 250=25.72%, 500=65.52%, 750=5.89%, 1000=0.29% 00:19:05.270 lat (msec) : 50=2.59% 00:19:05.270 cpu : usr=0.90%, sys=1.40%, ctx=699, majf=0, minf=2 00:19:05.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.270 issued rwts: total=184,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.270 00:19:05.270 Run status group 0 (all jobs): 00:19:05.270 READ: bw=8701KiB/s (8910kB/s), 86.1KiB/s-4092KiB/s (88.2kB/s-4190kB/s), io=9032KiB (9249kB), run=1001-1038msec 00:19:05.270 WRITE: bw=15.4MiB/s (16.1MB/s), 2004KiB/s-6086KiB/s (2052kB/s-6232kB/s), io=15.9MiB (16.7MB), run=1001-1038msec 00:19:05.270 00:19:05.270 Disk stats (read/write): 00:19:05.270 nvme0n1: ios=1068/1439, merge=0/0, ticks=641/341, in_queue=982, util=86.67% 00:19:05.270 nvme0n2: ios=45/512, merge=0/0, ticks=1568/126, in_queue=1694, util=89.11% 00:19:05.270 nvme0n3: ios=1081/1117, merge=0/0, ticks=449/412, in_queue=861, util=95.30% 00:19:05.270 nvme0n4: ios=73/512, merge=0/0, ticks=766/161, in_queue=927, util=94.31% 00:19:05.270 20:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:05.270 [global] 00:19:05.270 thread=1 00:19:05.270 invalidate=1 00:19:05.270 rw=randwrite 00:19:05.270 time_based=1 00:19:05.270 runtime=1 00:19:05.270 ioengine=libaio 00:19:05.270 direct=1 00:19:05.270 bs=4096 00:19:05.270 iodepth=1 00:19:05.270 norandommap=0 00:19:05.270 numjobs=1 00:19:05.270 00:19:05.270 verify_dump=1 00:19:05.270 verify_backlog=512 00:19:05.270 verify_state_save=0 00:19:05.270 do_verify=1 00:19:05.270 verify=crc32c-intel 00:19:05.270 [job0] 00:19:05.270 filename=/dev/nvme0n1 00:19:05.270 [job1] 00:19:05.270 filename=/dev/nvme0n2 00:19:05.270 [job2] 00:19:05.270 filename=/dev/nvme0n3 00:19:05.270 [job3] 00:19:05.270 filename=/dev/nvme0n4 00:19:05.270 Could not set queue depth (nvme0n1) 00:19:05.270 Could not set queue depth (nvme0n2) 00:19:05.270 Could not set queue depth (nvme0n3) 00:19:05.270 Could not set queue depth (nvme0n4) 00:19:05.270 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.270 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.270 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.270 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.270 fio-3.35 00:19:05.270 Starting 4 threads 00:19:06.646 00:19:06.646 job0: (groupid=0, jobs=1): err= 0: pid=4054942: Mon Jul 15 20:24:44 2024 00:19:06.646 read: IOPS=1247, BW=4991KiB/s (5111kB/s)(4996KiB/1001msec) 00:19:06.646 slat (nsec): min=4703, max=57762, avg=17362.46, stdev=10360.45 00:19:06.646 clat (usec): min=283, max=1959, avg=444.88, stdev=122.78 00:19:06.646 lat (usec): min=289, max=1993, avg=462.24, stdev=128.41 00:19:06.646 clat percentiles (usec): 00:19:06.646 | 1.00th=[ 297], 5.00th=[ 318], 10.00th=[ 367], 20.00th=[ 383], 00:19:06.646 | 30.00th=[ 392], 40.00th=[ 
396], 50.00th=[ 404], 60.00th=[ 449], 00:19:06.646 | 70.00th=[ 469], 80.00th=[ 494], 90.00th=[ 523], 95.00th=[ 676], 00:19:06.646 | 99.00th=[ 906], 99.50th=[ 1090], 99.90th=[ 1696], 99.95th=[ 1958], 00:19:06.646 | 99.99th=[ 1958] 00:19:06.646 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:06.646 slat (nsec): min=6427, max=65023, avg=15040.08, stdev=9128.95 00:19:06.646 clat (usec): min=179, max=932, avg=252.10, stdev=97.35 00:19:06.646 lat (usec): min=186, max=973, avg=267.14, stdev=102.93 00:19:06.646 clat percentiles (usec): 00:19:06.646 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:19:06.646 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:19:06.646 | 70.00th=[ 231], 80.00th=[ 258], 90.00th=[ 396], 95.00th=[ 494], 00:19:06.646 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 898], 99.95th=[ 930], 00:19:06.647 | 99.99th=[ 930] 00:19:06.647 bw ( KiB/s): min= 5376, max= 5376, per=45.46%, avg=5376.00, stdev= 0.00, samples=1 00:19:06.647 iops : min= 1344, max= 1344, avg=1344.00, stdev= 0.00, samples=1 00:19:06.647 lat (usec) : 250=43.12%, 500=46.79%, 750=8.90%, 1000=0.90% 00:19:06.647 lat (msec) : 2=0.29% 00:19:06.647 cpu : usr=2.60%, sys=4.40%, ctx=2788, majf=0, minf=1 00:19:06.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 issued rwts: total=1249,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.647 job1: (groupid=0, jobs=1): err= 0: pid=4054943: Mon Jul 15 20:24:44 2024 00:19:06.647 read: IOPS=345, BW=1382KiB/s (1415kB/s)(1436KiB/1039msec) 00:19:06.647 slat (nsec): min=11187, max=54798, avg=29878.58, stdev=7936.91 00:19:06.647 clat (usec): min=366, max=41021, avg=2480.91, stdev=8852.95 00:19:06.647 lat (usec): min=388, max=41036, avg=2510.79, stdev=8850.18 00:19:06.647 clat percentiles (usec): 00:19:06.647 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 433], 00:19:06.647 | 30.00th=[ 441], 40.00th=[ 445], 50.00th=[ 449], 60.00th=[ 453], 00:19:06.647 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 506], 95.00th=[40633], 00:19:06.647 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:06.647 | 99.99th=[41157] 00:19:06.647 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:19:06.647 slat (nsec): min=7428, max=66045, avg=17307.36, stdev=8853.31 00:19:06.647 clat (usec): min=187, max=446, avg=239.85, stdev=31.42 00:19:06.647 lat (usec): min=204, max=492, avg=257.16, stdev=32.78 00:19:06.647 clat percentiles (usec): 00:19:06.647 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:19:06.647 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:19:06.647 | 70.00th=[ 247], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 306], 00:19:06.647 | 99.00th=[ 334], 99.50th=[ 371], 99.90th=[ 445], 99.95th=[ 445], 00:19:06.647 | 99.99th=[ 445] 00:19:06.647 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.647 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.647 lat (usec) : 250=43.74%, 500=50.86%, 750=3.33% 00:19:06.647 lat (msec) : 50=2.07% 00:19:06.647 cpu : usr=1.06%, sys=1.83%, ctx=872, majf=0, minf=1 00:19:06.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.647 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 issued rwts: total=359,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.647 job2: (groupid=0, jobs=1): err= 0: pid=4054944: Mon Jul 15 20:24:44 2024 00:19:06.647 read: IOPS=20, BW=82.1KiB/s (84.1kB/s)(84.0KiB/1023msec) 00:19:06.647 slat (nsec): min=13295, max=34732, avg=20307.71, stdev=8227.71 00:19:06.647 clat (usec): min=40847, max=41019, avg=40964.13, stdev=36.75 00:19:06.647 lat (usec): min=40866, max=41033, avg=40984.43, stdev=34.49 00:19:06.647 clat percentiles (usec): 00:19:06.647 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:06.647 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:06.647 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:06.647 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:06.647 | 99.99th=[41157] 00:19:06.647 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:19:06.647 slat (nsec): min=7963, max=54446, avg=22006.91, stdev=10751.51 00:19:06.647 clat (usec): min=214, max=536, avg=288.95, stdev=60.68 00:19:06.647 lat (usec): min=227, max=570, avg=310.96, stdev=62.86 00:19:06.647 clat percentiles (usec): 00:19:06.647 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:19:06.647 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 265], 60.00th=[ 285], 00:19:06.647 | 70.00th=[ 310], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 420], 00:19:06.647 | 99.00th=[ 457], 99.50th=[ 461], 99.90th=[ 537], 99.95th=[ 537], 00:19:06.647 | 99.99th=[ 537] 00:19:06.647 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.647 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.647 lat (usec) : 250=36.21%, 500=59.66%, 750=0.19% 00:19:06.647 lat (msec) : 50=3.94% 00:19:06.647 cpu : usr=0.68%, sys=0.98%, ctx=534, majf=0, minf=2 00:19:06.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.647 job3: (groupid=0, jobs=1): err= 0: pid=4054945: Mon Jul 15 20:24:44 2024 00:19:06.647 read: IOPS=186, BW=747KiB/s (765kB/s)(764KiB/1023msec) 00:19:06.647 slat (nsec): min=9324, max=43827, avg=14748.65, stdev=7080.81 00:19:06.647 clat (usec): min=382, max=42378, avg=4166.94, stdev=11640.07 00:19:06.647 lat (usec): min=392, max=42388, avg=4181.68, stdev=11641.86 00:19:06.647 clat percentiles (usec): 00:19:06.647 | 1.00th=[ 388], 5.00th=[ 437], 10.00th=[ 465], 20.00th=[ 482], 00:19:06.647 | 30.00th=[ 494], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 545], 00:19:06.647 | 70.00th=[ 562], 80.00th=[ 619], 90.00th=[ 824], 95.00th=[41157], 00:19:06.647 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.647 | 99.99th=[42206] 00:19:06.647 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:19:06.647 slat (nsec): min=9364, max=68758, avg=26049.69, stdev=10912.10 00:19:06.647 clat (usec): min=260, max=847, avg=403.38, stdev=75.61 00:19:06.647 lat (usec): min=269, max=865, avg=429.43, stdev=77.86 
00:19:06.647 clat percentiles (usec): 00:19:06.647 | 1.00th=[ 269], 5.00th=[ 297], 10.00th=[ 322], 20.00th=[ 347], 00:19:06.647 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 396], 60.00th=[ 412], 00:19:06.647 | 70.00th=[ 429], 80.00th=[ 453], 90.00th=[ 494], 95.00th=[ 537], 00:19:06.647 | 99.00th=[ 652], 99.50th=[ 693], 99.90th=[ 848], 99.95th=[ 848], 00:19:06.647 | 99.99th=[ 848] 00:19:06.647 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.647 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.647 lat (usec) : 500=75.39%, 750=21.19%, 1000=1.00% 00:19:06.647 lat (msec) : 50=2.42% 00:19:06.647 cpu : usr=0.88%, sys=1.96%, ctx=704, majf=0, minf=1 00:19:06.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.647 issued rwts: total=191,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.647 00:19:06.647 Run status group 0 (all jobs): 00:19:06.647 READ: bw=7007KiB/s (7175kB/s), 82.1KiB/s-4991KiB/s (84.1kB/s-5111kB/s), io=7280KiB (7455kB), run=1001-1039msec 00:19:06.647 WRITE: bw=11.5MiB/s (12.1MB/s), 1971KiB/s-6138KiB/s (2018kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1039msec 00:19:06.647 00:19:06.647 Disk stats (read/write): 00:19:06.647 nvme0n1: ios=1049/1240, merge=0/0, ticks=1487/322, in_queue=1809, util=99.20% 00:19:06.647 nvme0n2: ios=377/512, merge=0/0, ticks=1574/114, in_queue=1688, util=91.07% 00:19:06.647 nvme0n3: ios=40/512, merge=0/0, ticks=1600/153, in_queue=1753, util=94.06% 00:19:06.647 nvme0n4: ios=244/512, merge=0/0, ticks=1260/200, in_queue=1460, util=98.32% 00:19:06.647 20:24:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:06.647 [global] 00:19:06.647 thread=1 00:19:06.647 invalidate=1 00:19:06.647 rw=write 00:19:06.647 time_based=1 00:19:06.647 runtime=1 00:19:06.647 ioengine=libaio 00:19:06.647 direct=1 00:19:06.647 bs=4096 00:19:06.647 iodepth=128 00:19:06.647 norandommap=0 00:19:06.647 numjobs=1 00:19:06.647 00:19:06.647 verify_dump=1 00:19:06.647 verify_backlog=512 00:19:06.647 verify_state_save=0 00:19:06.647 do_verify=1 00:19:06.647 verify=crc32c-intel 00:19:06.647 [job0] 00:19:06.647 filename=/dev/nvme0n1 00:19:06.647 [job1] 00:19:06.647 filename=/dev/nvme0n2 00:19:06.647 [job2] 00:19:06.648 filename=/dev/nvme0n3 00:19:06.648 [job3] 00:19:06.648 filename=/dev/nvme0n4 00:19:06.648 Could not set queue depth (nvme0n1) 00:19:06.648 Could not set queue depth (nvme0n2) 00:19:06.648 Could not set queue depth (nvme0n3) 00:19:06.648 Could not set queue depth (nvme0n4) 00:19:06.648 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.648 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.648 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.648 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.648 fio-3.35 00:19:06.648 Starting 4 threads 00:19:08.019 00:19:08.019 job0: (groupid=0, jobs=1): err= 0: pid=4055169: Mon Jul 15 20:24:46 2024 00:19:08.019 read: IOPS=3573, 
BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:19:08.019 slat (usec): min=2, max=15113, avg=123.45, stdev=753.32 00:19:08.019 clat (usec): min=3433, max=43543, avg=15588.24, stdev=6094.48 00:19:08.019 lat (usec): min=3437, max=43559, avg=15711.69, stdev=6151.52 00:19:08.019 clat percentiles (usec): 00:19:08.019 | 1.00th=[ 4883], 5.00th=[ 8029], 10.00th=[10814], 20.00th=[12518], 00:19:08.019 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13698], 60.00th=[14746], 00:19:08.019 | 70.00th=[15795], 80.00th=[17171], 90.00th=[24773], 95.00th=[31851], 00:19:08.019 | 99.00th=[35390], 99.50th=[35390], 99.90th=[40633], 99.95th=[42206], 00:19:08.019 | 99.99th=[43779] 00:19:08.019 write: IOPS=3629, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1003msec); 0 zone resets 00:19:08.019 slat (usec): min=3, max=10559, avg=142.64, stdev=674.93 00:19:08.019 clat (usec): min=588, max=42869, avg=19056.08, stdev=7331.02 00:19:08.019 lat (usec): min=606, max=42876, avg=19198.71, stdev=7381.03 00:19:08.019 clat percentiles (usec): 00:19:08.019 | 1.00th=[ 1582], 5.00th=[ 5997], 10.00th=[ 9896], 20.00th=[13566], 00:19:08.019 | 30.00th=[15270], 40.00th=[18744], 50.00th=[20055], 60.00th=[20841], 00:19:08.019 | 70.00th=[21365], 80.00th=[23200], 90.00th=[25035], 95.00th=[32900], 00:19:08.019 | 99.00th=[38536], 99.50th=[39584], 99.90th=[42730], 99.95th=[42730], 00:19:08.019 | 99.99th=[42730] 00:19:08.019 bw ( KiB/s): min=13240, max=15432, per=31.02%, avg=14336.00, stdev=1549.98, samples=2 00:19:08.019 iops : min= 3310, max= 3858, avg=3584.00, stdev=387.49, samples=2 00:19:08.019 lat (usec) : 750=0.06%, 1000=0.01% 00:19:08.019 lat (msec) : 2=0.93%, 4=1.36%, 10=6.56%, 20=59.33%, 50=31.76% 00:19:08.019 cpu : usr=3.09%, sys=5.89%, ctx=456, majf=0, minf=13 00:19:08.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:08.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.020 issued rwts: total=3584,3640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.020 job1: (groupid=0, jobs=1): err= 0: pid=4055170: Mon Jul 15 20:24:46 2024 00:19:08.020 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:19:08.020 slat (usec): min=2, max=19827, avg=245.54, stdev=1425.88 00:19:08.020 clat (usec): min=7358, max=75769, avg=33281.15, stdev=22434.89 00:19:08.020 lat (usec): min=7363, max=75781, avg=33526.69, stdev=22575.65 00:19:08.020 clat percentiles (usec): 00:19:08.020 | 1.00th=[ 7373], 5.00th=[ 9241], 10.00th=[12780], 20.00th=[14877], 00:19:08.020 | 30.00th=[18220], 40.00th=[20841], 50.00th=[22938], 60.00th=[25297], 00:19:08.020 | 70.00th=[43254], 80.00th=[64226], 90.00th=[70779], 95.00th=[73925], 00:19:08.020 | 99.00th=[74974], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:19:08.020 | 99.99th=[76022] 00:19:08.020 write: IOPS=2216, BW=8865KiB/s (9078kB/s)(8936KiB/1008msec); 0 zone resets 00:19:08.020 slat (usec): min=4, max=13149, avg=214.99, stdev=1195.50 00:19:08.020 clat (usec): min=849, max=59098, avg=26179.37, stdev=12633.00 00:19:08.020 lat (usec): min=855, max=59113, avg=26394.35, stdev=12673.59 00:19:08.020 clat percentiles (usec): 00:19:08.020 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[11731], 20.00th=[16057], 00:19:08.020 | 30.00th=[17957], 40.00th=[20317], 50.00th=[21627], 60.00th=[24511], 00:19:08.020 | 70.00th=[34866], 80.00th=[40633], 90.00th=[44303], 95.00th=[46400], 00:19:08.020 | 99.00th=[58459], 99.50th=[58459], 
99.90th=[58983], 99.95th=[58983], 00:19:08.020 | 99.99th=[58983] 00:19:08.020 bw ( KiB/s): min= 4560, max=12288, per=18.23%, avg=8424.00, stdev=5464.52, samples=2 00:19:08.020 iops : min= 1140, max= 3072, avg=2106.00, stdev=1366.13, samples=2 00:19:08.020 lat (usec) : 1000=0.09% 00:19:08.020 lat (msec) : 2=0.35%, 10=7.29%, 20=29.12%, 50=48.30%, 100=14.85% 00:19:08.020 cpu : usr=1.89%, sys=2.98%, ctx=216, majf=0, minf=11 00:19:08.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:08.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.020 issued rwts: total=2048,2234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.020 job2: (groupid=0, jobs=1): err= 0: pid=4055171: Mon Jul 15 20:24:46 2024 00:19:08.020 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:19:08.020 slat (usec): min=3, max=16655, avg=181.56, stdev=1109.77 00:19:08.020 clat (usec): min=11039, max=41656, avg=22154.38, stdev=6347.37 00:19:08.020 lat (usec): min=11047, max=41667, avg=22335.94, stdev=6422.34 00:19:08.020 clat percentiles (usec): 00:19:08.020 | 1.00th=[12649], 5.00th=[14091], 10.00th=[15139], 20.00th=[15533], 00:19:08.020 | 30.00th=[18744], 40.00th=[20055], 50.00th=[21627], 60.00th=[23462], 00:19:08.020 | 70.00th=[24511], 80.00th=[26084], 90.00th=[31065], 95.00th=[34341], 00:19:08.020 | 99.00th=[40633], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:08.020 | 99.99th=[41681] 00:19:08.020 write: IOPS=2705, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1011msec); 0 zone resets 00:19:08.020 slat (usec): min=4, max=17587, avg=187.95, stdev=1015.61 00:19:08.020 clat (usec): min=6517, max=74307, avg=25977.90, stdev=9933.27 00:19:08.020 lat (usec): min=10615, max=74312, avg=26165.85, stdev=9992.72 00:19:08.020 clat percentiles (usec): 00:19:08.020 | 1.00th=[12256], 5.00th=[14877], 10.00th=[16909], 20.00th=[19006], 00:19:08.020 | 30.00th=[19792], 40.00th=[21365], 50.00th=[22938], 60.00th=[25035], 00:19:08.020 | 70.00th=[28967], 80.00th=[32900], 90.00th=[40109], 95.00th=[43779], 00:19:08.020 | 99.00th=[61604], 99.50th=[69731], 99.90th=[73925], 99.95th=[73925], 00:19:08.020 | 99.99th=[73925] 00:19:08.020 bw ( KiB/s): min= 9152, max=11704, per=22.56%, avg=10428.00, stdev=1804.54, samples=2 00:19:08.020 iops : min= 2288, max= 2926, avg=2607.00, stdev=451.13, samples=2 00:19:08.020 lat (msec) : 10=0.02%, 20=37.92%, 50=60.83%, 100=1.23% 00:19:08.020 cpu : usr=3.96%, sys=4.06%, ctx=273, majf=0, minf=13 00:19:08.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:08.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.020 issued rwts: total=2560,2735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.020 job3: (groupid=0, jobs=1): err= 0: pid=4055172: Mon Jul 15 20:24:46 2024 00:19:08.020 read: IOPS=2905, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1008msec) 00:19:08.020 slat (usec): min=3, max=13339, avg=172.36, stdev=963.94 00:19:08.020 clat (usec): min=4621, max=56265, avg=21856.01, stdev=7604.90 00:19:08.020 lat (usec): min=8428, max=56286, avg=22028.37, stdev=7676.05 00:19:08.020 clat percentiles (usec): 00:19:08.020 | 1.00th=[ 9896], 5.00th=[12780], 10.00th=[14353], 20.00th=[16188], 00:19:08.020 | 
30.00th=[17957], 40.00th=[19006], 50.00th=[20055], 60.00th=[21627], 00:19:08.020 | 70.00th=[23725], 80.00th=[26870], 90.00th=[30802], 95.00th=[38536], 00:19:08.020 | 99.00th=[49546], 99.50th=[53740], 99.90th=[54264], 99.95th=[55837], 00:19:08.020 | 99.99th=[56361] 00:19:08.020 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:19:08.020 slat (usec): min=4, max=10792, avg=153.10, stdev=841.31 00:19:08.020 clat (usec): min=9036, max=40007, avg=20658.78, stdev=6701.52 00:19:08.020 lat (usec): min=9043, max=40021, avg=20811.88, stdev=6771.45 00:19:08.020 clat percentiles (usec): 00:19:08.020 | 1.00th=[10028], 5.00th=[11994], 10.00th=[13698], 20.00th=[15533], 00:19:08.020 | 30.00th=[16909], 40.00th=[18220], 50.00th=[18744], 60.00th=[19530], 00:19:08.020 | 70.00th=[21890], 80.00th=[25822], 90.00th=[32637], 95.00th=[34341], 00:19:08.020 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:19:08.020 | 99.99th=[40109] 00:19:08.020 bw ( KiB/s): min=12288, max=12288, per=26.59%, avg=12288.00, stdev= 0.00, samples=2 00:19:08.020 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:19:08.020 lat (msec) : 10=1.07%, 20=55.42%, 50=43.08%, 100=0.43% 00:19:08.020 cpu : usr=3.87%, sys=5.36%, ctx=284, majf=0, minf=13 00:19:08.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:08.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.020 issued rwts: total=2929,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.020 00:19:08.020 Run status group 0 (all jobs): 00:19:08.020 READ: bw=43.0MiB/s (45.1MB/s), 8127KiB/s-14.0MiB/s (8322kB/s-14.6MB/s), io=43.4MiB (45.6MB), run=1003-1011msec 00:19:08.020 WRITE: bw=45.1MiB/s (47.3MB/s), 8865KiB/s-14.2MiB/s (9078kB/s-14.9MB/s), io=45.6MiB (47.8MB), run=1003-1011msec 00:19:08.020 00:19:08.020 Disk stats (read/write): 00:19:08.020 nvme0n1: ios=3115/3072, merge=0/0, ticks=25280/29621, in_queue=54901, util=97.90% 00:19:08.020 nvme0n2: ios=1832/2048, merge=0/0, ticks=15540/14242, in_queue=29782, util=95.74% 00:19:08.020 nvme0n3: ios=2106/2480, merge=0/0, ticks=23256/28336, in_queue=51592, util=99.06% 00:19:08.020 nvme0n4: ios=2395/2560, merge=0/0, ticks=26375/25721, in_queue=52096, util=89.68% 00:19:08.020 20:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:08.020 [global] 00:19:08.020 thread=1 00:19:08.020 invalidate=1 00:19:08.020 rw=randwrite 00:19:08.020 time_based=1 00:19:08.020 runtime=1 00:19:08.020 ioengine=libaio 00:19:08.020 direct=1 00:19:08.020 bs=4096 00:19:08.020 iodepth=128 00:19:08.020 norandommap=0 00:19:08.020 numjobs=1 00:19:08.020 00:19:08.020 verify_dump=1 00:19:08.020 verify_backlog=512 00:19:08.020 verify_state_save=0 00:19:08.020 do_verify=1 00:19:08.020 verify=crc32c-intel 00:19:08.020 [job0] 00:19:08.020 filename=/dev/nvme0n1 00:19:08.020 [job1] 00:19:08.020 filename=/dev/nvme0n2 00:19:08.021 [job2] 00:19:08.021 filename=/dev/nvme0n3 00:19:08.021 [job3] 00:19:08.021 filename=/dev/nvme0n4 00:19:08.021 Could not set queue depth (nvme0n1) 00:19:08.021 Could not set queue depth (nvme0n2) 00:19:08.021 Could not set queue depth (nvme0n3) 00:19:08.021 Could not set queue depth (nvme0n4) 00:19:08.279 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.279 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.279 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.279 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.279 fio-3.35 00:19:08.279 Starting 4 threads 00:19:09.648 00:19:09.648 job0: (groupid=0, jobs=1): err= 0: pid=4055442: Mon Jul 15 20:24:47 2024 00:19:09.648 read: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1005msec) 00:19:09.648 slat (usec): min=3, max=14981, avg=160.91, stdev=789.17 00:19:09.648 clat (usec): min=4228, max=50481, avg=19355.40, stdev=8467.22 00:19:09.648 lat (usec): min=7638, max=50488, avg=19516.31, stdev=8535.47 00:19:09.648 clat percentiles (usec): 00:19:09.648 | 1.00th=[ 7898], 5.00th=[10290], 10.00th=[11207], 20.00th=[11600], 00:19:09.648 | 30.00th=[13304], 40.00th=[14746], 50.00th=[16909], 60.00th=[20055], 00:19:09.648 | 70.00th=[21890], 80.00th=[25822], 90.00th=[31851], 95.00th=[35390], 00:19:09.648 | 99.00th=[40109], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:19:09.648 | 99.99th=[50594] 00:19:09.648 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:19:09.648 slat (usec): min=4, max=16702, avg=164.69, stdev=740.08 00:19:09.648 clat (usec): min=8090, max=47582, avg=23259.77, stdev=8817.43 00:19:09.648 lat (usec): min=8098, max=47603, avg=23424.46, stdev=8891.80 00:19:09.648 clat percentiles (usec): 00:19:09.648 | 1.00th=[10945], 5.00th=[12256], 10.00th=[13566], 20.00th=[14484], 00:19:09.648 | 30.00th=[16581], 40.00th=[20055], 50.00th=[21890], 60.00th=[23725], 00:19:09.648 | 70.00th=[26346], 80.00th=[30016], 90.00th=[38011], 95.00th=[41157], 00:19:09.648 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45351], 99.95th=[47449], 00:19:09.648 | 99.99th=[47449] 00:19:09.648 bw ( KiB/s): min=12288, max=12288, per=23.16%, avg=12288.00, stdev= 0.00, samples=2 00:19:09.648 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:19:09.648 lat (msec) : 10=1.79%, 20=47.46%, 50=50.36%, 100=0.39% 00:19:09.648 cpu : usr=5.18%, sys=6.97%, ctx=502, majf=0, minf=1 00:19:09.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.648 issued rwts: total=2842,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.648 job1: (groupid=0, jobs=1): err= 0: pid=4055463: Mon Jul 15 20:24:47 2024 00:19:09.648 read: IOPS=3091, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1010msec) 00:19:09.648 slat (usec): min=3, max=15404, avg=134.76, stdev=966.11 00:19:09.648 clat (usec): min=1423, max=111026, avg=15717.72, stdev=14111.57 00:19:09.648 lat (usec): min=1428, max=111057, avg=15852.48, stdev=14268.51 00:19:09.648 clat percentiles (msec): 00:19:09.648 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 8], 00:19:09.648 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:19:09.648 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 28], 95.00th=[ 42], 00:19:09.648 | 99.00th=[ 78], 99.50th=[ 95], 99.90th=[ 111], 99.95th=[ 111], 00:19:09.648 | 99.99th=[ 111] 00:19:09.648 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:19:09.648 slat (usec): min=3, 
max=11715, avg=128.22, stdev=658.63 00:19:09.648 clat (usec): min=371, max=111075, avg=22022.74, stdev=20914.28 00:19:09.648 lat (usec): min=386, max=111095, avg=22150.97, stdev=21018.07 00:19:09.648 clat percentiles (usec): 00:19:09.648 | 1.00th=[ 1012], 5.00th=[ 2933], 10.00th=[ 5342], 20.00th=[ 6718], 00:19:09.648 | 30.00th=[ 9765], 40.00th=[ 11469], 50.00th=[ 12649], 60.00th=[ 17957], 00:19:09.648 | 70.00th=[ 23200], 80.00th=[ 39584], 90.00th=[ 49021], 95.00th=[ 55837], 00:19:09.648 | 99.00th=[107480], 99.50th=[107480], 99.90th=[109577], 99.95th=[110625], 00:19:09.648 | 99.99th=[110625] 00:19:09.648 bw ( KiB/s): min=12288, max=15768, per=26.43%, avg=14028.00, stdev=2460.73, samples=2 00:19:09.648 iops : min= 3072, max= 3942, avg=3507.00, stdev=615.18, samples=2 00:19:09.648 lat (usec) : 500=0.03%, 750=0.09%, 1000=0.36% 00:19:09.648 lat (msec) : 2=1.97%, 4=4.49%, 10=21.70%, 20=41.96%, 50=22.61% 00:19:09.648 lat (msec) : 100=5.56%, 250=1.24% 00:19:09.648 cpu : usr=4.66%, sys=7.04%, ctx=398, majf=0, minf=1 00:19:09.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.648 issued rwts: total=3122,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.648 job2: (groupid=0, jobs=1): err= 0: pid=4055497: Mon Jul 15 20:24:47 2024 00:19:09.648 read: IOPS=3229, BW=12.6MiB/s (13.2MB/s)(13.3MiB/1052msec) 00:19:09.648 slat (usec): min=3, max=16749, avg=141.85, stdev=974.93 00:19:09.648 clat (usec): min=1358, max=80997, avg=20117.89, stdev=13240.61 00:19:09.648 lat (usec): min=1374, max=82547, avg=20259.74, stdev=13304.71 00:19:09.648 clat percentiles (usec): 00:19:09.648 | 1.00th=[ 4817], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[12649], 00:19:09.648 | 30.00th=[13960], 40.00th=[14746], 50.00th=[16188], 60.00th=[17695], 00:19:09.648 | 70.00th=[19530], 80.00th=[23200], 90.00th=[31851], 95.00th=[50070], 00:19:09.648 | 99.00th=[80217], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:19:09.648 | 99.99th=[81265] 00:19:09.648 write: IOPS=3406, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1052msec); 0 zone resets 00:19:09.648 slat (usec): min=4, max=12979, avg=134.57, stdev=836.03 00:19:09.648 clat (usec): min=3720, max=47031, avg=17906.26, stdev=6916.45 00:19:09.648 lat (usec): min=3728, max=47040, avg=18040.84, stdev=6973.88 00:19:09.648 clat percentiles (usec): 00:19:09.648 | 1.00th=[ 7242], 5.00th=[ 8094], 10.00th=[11469], 20.00th=[12780], 00:19:09.648 | 30.00th=[13566], 40.00th=[14353], 50.00th=[16188], 60.00th=[19792], 00:19:09.648 | 70.00th=[21890], 80.00th=[22938], 90.00th=[23987], 95.00th=[27132], 00:19:09.648 | 99.00th=[45351], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:19:09.648 | 99.99th=[46924] 00:19:09.648 bw ( KiB/s): min=12296, max=16376, per=27.01%, avg=14336.00, stdev=2885.00, samples=2 00:19:09.648 iops : min= 3074, max= 4094, avg=3584.00, stdev=721.25, samples=2 00:19:09.648 lat (msec) : 2=0.03%, 4=0.09%, 10=6.42%, 20=60.32%, 50=30.40% 00:19:09.648 lat (msec) : 100=2.75% 00:19:09.648 cpu : usr=5.33%, sys=6.18%, ctx=260, majf=0, minf=1 00:19:09.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.648 issued rwts: 
total=3397,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.648 job3: (groupid=0, jobs=1): err= 0: pid=4055514: Mon Jul 15 20:24:47 2024 00:19:09.648 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:19:09.648 slat (usec): min=3, max=21581, avg=140.39, stdev=878.74 00:19:09.648 clat (usec): min=7655, max=53531, avg=17753.30, stdev=5884.99 00:19:09.648 lat (usec): min=7663, max=53545, avg=17893.68, stdev=5953.45 00:19:09.648 clat percentiles (usec): 00:19:09.648 | 1.00th=[10028], 5.00th=[11863], 10.00th=[12649], 20.00th=[13698], 00:19:09.648 | 30.00th=[14353], 40.00th=[15270], 50.00th=[16450], 60.00th=[17957], 00:19:09.648 | 70.00th=[19006], 80.00th=[20841], 90.00th=[22938], 95.00th=[28443], 00:19:09.648 | 99.00th=[44827], 99.50th=[49546], 99.90th=[53740], 99.95th=[53740], 00:19:09.648 | 99.99th=[53740] 00:19:09.648 write: IOPS=3669, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1013msec); 0 zone resets 00:19:09.648 slat (usec): min=4, max=9493, avg=121.99, stdev=575.80 00:19:09.648 clat (usec): min=2483, max=53474, avg=17438.71, stdev=8403.03 00:19:09.648 lat (usec): min=2510, max=53512, avg=17560.70, stdev=8443.10 00:19:09.648 clat percentiles (usec): 00:19:09.648 | 1.00th=[ 5538], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[11469], 00:19:09.648 | 30.00th=[12256], 40.00th=[13698], 50.00th=[14877], 60.00th=[16057], 00:19:09.648 | 70.00th=[20579], 80.00th=[23987], 90.00th=[28705], 95.00th=[36963], 00:19:09.648 | 99.00th=[43779], 99.50th=[44827], 99.90th=[53216], 99.95th=[53216], 00:19:09.648 | 99.99th=[53216] 00:19:09.648 bw ( KiB/s): min=13216, max=15568, per=27.12%, avg=14392.00, stdev=1663.12, samples=2 00:19:09.648 iops : min= 3304, max= 3892, avg=3598.00, stdev=415.78, samples=2 00:19:09.648 lat (msec) : 4=0.04%, 10=7.83%, 20=64.40%, 50=27.48%, 100=0.25% 00:19:09.648 cpu : usr=5.63%, sys=8.00%, ctx=393, majf=0, minf=1 00:19:09.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.648 issued rwts: total=3584,3717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.648 00:19:09.648 Run status group 0 (all jobs): 00:19:09.648 READ: bw=48.1MiB/s (50.4MB/s), 11.0MiB/s-13.8MiB/s (11.6MB/s-14.5MB/s), io=50.6MiB (53.0MB), run=1005-1052msec 00:19:09.648 WRITE: bw=51.8MiB/s (54.3MB/s), 11.9MiB/s-14.3MiB/s (12.5MB/s-15.0MB/s), io=54.5MiB (57.2MB), run=1005-1052msec 00:19:09.648 00:19:09.648 Disk stats (read/write): 00:19:09.648 nvme0n1: ios=2417/2560, merge=0/0, ticks=15594/18534, in_queue=34128, util=90.88% 00:19:09.648 nvme0n2: ios=2591/2695, merge=0/0, ticks=44003/58118, in_queue=102121, util=97.87% 00:19:09.648 nvme0n3: ios=2647/3072, merge=0/0, ticks=34072/37845, in_queue=71917, util=88.78% 00:19:09.649 nvme0n4: ios=2982/3072, merge=0/0, ticks=46354/49745, in_queue=96099, util=89.63% 00:19:09.649 20:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:09.649 20:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4055660 00:19:09.649 20:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:09.649 20:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:09.649 [global] 00:19:09.649 thread=1 00:19:09.649 
invalidate=1 00:19:09.649 rw=read 00:19:09.649 time_based=1 00:19:09.649 runtime=10 00:19:09.649 ioengine=libaio 00:19:09.649 direct=1 00:19:09.649 bs=4096 00:19:09.649 iodepth=1 00:19:09.649 norandommap=1 00:19:09.649 numjobs=1 00:19:09.649 00:19:09.649 [job0] 00:19:09.649 filename=/dev/nvme0n1 00:19:09.649 [job1] 00:19:09.649 filename=/dev/nvme0n2 00:19:09.649 [job2] 00:19:09.649 filename=/dev/nvme0n3 00:19:09.649 [job3] 00:19:09.649 filename=/dev/nvme0n4 00:19:09.649 Could not set queue depth (nvme0n1) 00:19:09.649 Could not set queue depth (nvme0n2) 00:19:09.649 Could not set queue depth (nvme0n3) 00:19:09.649 Could not set queue depth (nvme0n4) 00:19:09.649 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.649 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.649 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.649 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.649 fio-3.35 00:19:09.649 Starting 4 threads 00:19:12.929 20:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:12.929 20:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:12.929 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=30511104, buflen=4096 00:19:12.929 fio: pid=4055754, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:12.929 20:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.929 20:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:12.929 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1961984, buflen=4096 00:19:12.929 fio: pid=4055753, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.187 20:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.187 20:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:13.187 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7315456, buflen=4096 00:19:13.187 fio: pid=4055749, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.446 20:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.446 20:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:13.446 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11202560, buflen=4096 00:19:13.446 fio: pid=4055750, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.446 00:19:13.446 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055749: Mon Jul 15 20:24:51 2024 00:19:13.446 read: IOPS=518, BW=2071KiB/s (2121kB/s)(7144KiB/3449msec) 00:19:13.446 slat (usec): min=5, max=838, avg=23.72, stdev=22.00 00:19:13.446 clat (usec): min=305, 
max=42347, avg=1889.00, stdev=7654.06 00:19:13.446 lat (usec): min=319, max=42361, avg=1912.72, stdev=7656.25 00:19:13.446 clat percentiles (usec): 00:19:13.446 | 1.00th=[ 318], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 355], 00:19:13.446 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 404], 00:19:13.446 | 70.00th=[ 424], 80.00th=[ 449], 90.00th=[ 498], 95.00th=[ 570], 00:19:13.446 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:13.446 | 99.99th=[42206] 00:19:13.446 bw ( KiB/s): min= 96, max= 9088, per=17.63%, avg=2366.67, stdev=3674.96, samples=6 00:19:13.446 iops : min= 24, max= 2272, avg=591.67, stdev=918.74, samples=6 00:19:13.446 lat (usec) : 500=90.15%, 750=6.16% 00:19:13.446 lat (msec) : 50=3.64% 00:19:13.446 cpu : usr=0.58%, sys=1.36%, ctx=1788, majf=0, minf=1 00:19:13.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 issued rwts: total=1787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.446 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055750: Mon Jul 15 20:24:51 2024 00:19:13.446 read: IOPS=737, BW=2948KiB/s (3019kB/s)(10.7MiB/3711msec) 00:19:13.446 slat (usec): min=5, max=3800, avg=14.10, stdev=72.65 00:19:13.446 clat (usec): min=310, max=42417, avg=1331.05, stdev=6108.33 00:19:13.446 lat (usec): min=321, max=44999, avg=1345.14, stdev=6119.08 00:19:13.446 clat percentiles (usec): 00:19:13.446 | 1.00th=[ 330], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:19:13.446 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 388], 00:19:13.446 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 412], 95.00th=[ 498], 00:19:13.446 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:13.446 | 99.99th=[42206] 00:19:13.446 bw ( KiB/s): min= 93, max= 9880, per=23.24%, avg=3119.57, stdev=4448.36, samples=7 00:19:13.446 iops : min= 23, max= 2470, avg=779.86, stdev=1112.12, samples=7 00:19:13.446 lat (usec) : 500=95.03%, 750=2.30%, 1000=0.18% 00:19:13.446 lat (msec) : 2=0.04%, 4=0.04%, 10=0.04%, 50=2.34% 00:19:13.446 cpu : usr=0.73%, sys=1.40%, ctx=2738, majf=0, minf=1 00:19:13.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 issued rwts: total=2736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.446 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055753: Mon Jul 15 20:24:51 2024 00:19:13.446 read: IOPS=150, BW=601KiB/s (615kB/s)(1916KiB/3189msec) 00:19:13.446 slat (nsec): min=7385, max=43651, avg=14589.99, stdev=6615.48 00:19:13.446 clat (usec): min=319, max=41435, avg=6591.48, stdev=14480.51 00:19:13.446 lat (usec): min=327, max=41447, avg=6606.08, stdev=14484.45 00:19:13.446 clat percentiles (usec): 00:19:13.446 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 351], 20.00th=[ 498], 00:19:13.446 | 30.00th=[ 519], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:19:13.446 | 70.00th=[ 562], 80.00th=[ 594], 90.00th=[41157], 95.00th=[41157], 00:19:13.446 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:13.446 | 99.99th=[41681] 00:19:13.446 bw ( KiB/s): min= 96, max= 2496, per=4.71%, avg=632.00, stdev=965.82, samples=6 00:19:13.446 iops : min= 24, max= 624, avg=158.00, stdev=241.46, samples=6 00:19:13.446 lat (usec) : 500=20.21%, 750=64.58% 00:19:13.446 lat (msec) : 50=15.00% 00:19:13.446 cpu : usr=0.13%, sys=0.25%, ctx=483, majf=0, minf=1 00:19:13.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.446 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055754: Mon Jul 15 20:24:51 2024 00:19:13.446 read: IOPS=2558, BW=9.99MiB/s (10.5MB/s)(29.1MiB/2912msec) 00:19:13.446 slat (nsec): min=5127, max=64085, avg=12899.79, stdev=7029.98 00:19:13.446 clat (usec): min=292, max=3256, avg=371.71, stdev=59.58 00:19:13.446 lat (usec): min=298, max=3264, avg=384.61, stdev=63.39 00:19:13.446 clat percentiles (usec): 00:19:13.446 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:19:13.446 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 375], 00:19:13.446 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 474], 00:19:13.446 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 635], 99.95th=[ 644], 00:19:13.446 | 99.99th=[ 3261] 00:19:13.446 bw ( KiB/s): min= 9032, max=11888, per=76.75%, avg=10299.20, stdev=1082.91, samples=5 00:19:13.446 iops : min= 2258, max= 2972, avg=2574.80, stdev=270.73, samples=5 00:19:13.446 lat (usec) : 500=96.52%, 750=3.44%, 1000=0.01% 00:19:13.446 lat (msec) : 4=0.01% 00:19:13.446 cpu : usr=1.48%, sys=5.74%, ctx=7452, majf=0, minf=1 00:19:13.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.446 issued rwts: total=7450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.446 00:19:13.446 Run status group 0 (all jobs): 00:19:13.446 READ: bw=13.1MiB/s (13.7MB/s), 601KiB/s-9.99MiB/s (615kB/s-10.5MB/s), io=48.6MiB (51.0MB), run=2912-3711msec 00:19:13.446 00:19:13.446 Disk stats (read/write): 00:19:13.446 nvme0n1: ios=1784/0, merge=0/0, ticks=3250/0, in_queue=3250, util=95.88% 00:19:13.446 nvme0n2: ios=2732/0, merge=0/0, ticks=3481/0, in_queue=3481, util=96.49% 00:19:13.446 nvme0n3: ios=526/0, merge=0/0, ticks=4153/0, in_queue=4153, util=99.22% 00:19:13.446 nvme0n4: ios=7344/0, merge=0/0, ticks=2621/0, in_queue=2621, util=96.71% 00:19:13.704 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.704 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:13.963 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.963 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
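The bdev deletions in this stretch of the trace are the hotplug check itself: backing bdevs are removed from the subsystem while fio still has the namespaces open, so in-flight reads complete with Remote I/O errors (the err=121 lines above) and fio exits non-zero, which the script then treats as the expected outcome. A minimal sketch of that pattern, assuming the wrapper and bdev names shown in this trace (paths shortened):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &    # 10 s of reads against /dev/nvme0n1..n4
    fio_pid=$!
    sleep 3
    scripts/rpc.py bdev_raid_delete concat0                     # pull the raid/concat bdevs first
    scripts/rpc.py bdev_raid_delete raid0
    for b in Malloc0 Malloc1 Malloc2 Malloc3; do                # then the plain malloc bdevs
        scripts/rpc.py bdev_malloc_delete "$b"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'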
00:19:14.220 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.220 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:14.478 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.478 20:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:14.736 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:14.736 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4055660 00:19:14.736 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:14.736 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:14.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:14.992 nvmf hotplug test: fio failed as expected 00:19:14.992 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:15.249 rmmod nvme_tcp 00:19:15.249 rmmod nvme_fabrics 00:19:15.249 rmmod nvme_keyring 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set -e 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4053639 ']' 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4053639 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 4053639 ']' 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 4053639 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4053639 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:15.249 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4053639' 00:19:15.249 killing process with pid 4053639 00:19:15.250 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 4053639 00:19:15.250 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 4053639 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.541 20:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.443 20:24:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:17.443 00:19:17.443 real 0m23.561s 00:19:17.443 user 1m21.482s 00:19:17.443 sys 0m6.851s 00:19:17.443 20:24:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:17.443 20:24:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.443 ************************************ 00:19:17.443 END TEST nvmf_fio_target 00:19:17.443 ************************************ 00:19:17.443 20:24:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:17.443 20:24:55 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:17.443 20:24:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:17.443 20:24:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.443 20:24:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:17.702 ************************************ 00:19:17.702 START TEST nvmf_bdevio 00:19:17.702 ************************************ 00:19:17.702 20:24:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 
00:19:17.702 * Looking for test storage... 00:19:17.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.702 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:17.703 20:24:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:19.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:19.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:19.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:19.607 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.607 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.608 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:19.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:19:19.866 00:19:19.866 --- 10.0.0.2 ping statistics --- 00:19:19.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.866 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:19.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:19:19.866 00:19:19.866 --- 10.0.0.1 ping statistics --- 00:19:19.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.866 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.866 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4058367 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4058367 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 4058367 ']' 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.867 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:19.867 [2024-07-15 20:24:58.232359] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:19:19.867 [2024-07-15 20:24:58.232444] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.867 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.867 [2024-07-15 20:24:58.300947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:19.867 [2024-07-15 20:24:58.395450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.867 [2024-07-15 20:24:58.395513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:19.867 [2024-07-15 20:24:58.395530] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.867 [2024-07-15 20:24:58.395543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.867 [2024-07-15 20:24:58.395555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.867 [2024-07-15 20:24:58.396085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:19.867 [2024-07-15 20:24:58.396122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:19.867 [2024-07-15 20:24:58.396185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:19.867 [2024-07-15 20:24:58.396188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.126 [2024-07-15 20:24:58.552788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.126 Malloc0 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
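Collected in one place, the rpc_cmd calls traced above are the standard NVMe/TCP target bring-up for this test; rpc_cmd is a thin wrapper around scripts/rpc.py, so a hedged equivalent of the same five steps (arguments copied from the trace) is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                       # create the TCP transport
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB backing bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose Malloc0 as a namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420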
00:19:20.126 [2024-07-15 20:24:58.606304] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.126 { 00:19:20.126 "params": { 00:19:20.126 "name": "Nvme$subsystem", 00:19:20.126 "trtype": "$TEST_TRANSPORT", 00:19:20.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.126 "adrfam": "ipv4", 00:19:20.126 "trsvcid": "$NVMF_PORT", 00:19:20.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.126 "hdgst": ${hdgst:-false}, 00:19:20.126 "ddgst": ${ddgst:-false} 00:19:20.126 }, 00:19:20.126 "method": "bdev_nvme_attach_controller" 00:19:20.126 } 00:19:20.126 EOF 00:19:20.126 )") 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:20.126 20:24:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:20.126 "params": { 00:19:20.126 "name": "Nvme1", 00:19:20.126 "trtype": "tcp", 00:19:20.126 "traddr": "10.0.0.2", 00:19:20.126 "adrfam": "ipv4", 00:19:20.126 "trsvcid": "4420", 00:19:20.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.126 "hdgst": false, 00:19:20.126 "ddgst": false 00:19:20.126 }, 00:19:20.126 "method": "bdev_nvme_attach_controller" 00:19:20.126 }' 00:19:20.126 [2024-07-15 20:24:58.652984] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
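The JSON rendered just above by gen_nvmf_target_json is what bdevio reads from /dev/fd/62: a one-entry bdev config whose method is bdev_nvme_attach_controller, pointing the in-process initiator at the listener created earlier. For illustration only, the same attachment could be requested from an already-running SPDK app with the RPC sketched below; the flag spellings are an assumption about current scripts/rpc.py, not something this test executes:

    # name, transport, address/service id, address family, subsystem NQN, host NQN
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1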
00:19:20.126 [2024-07-15 20:24:58.653069] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058395 ] 00:19:20.385 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.385 [2024-07-15 20:24:58.713111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.385 [2024-07-15 20:24:58.805171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.385 [2024-07-15 20:24:58.805222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.385 [2024-07-15 20:24:58.805225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.643 I/O targets: 00:19:20.643 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:20.643 00:19:20.643 00:19:20.643 CUnit - A unit testing framework for C - Version 2.1-3 00:19:20.643 http://cunit.sourceforge.net/ 00:19:20.643 00:19:20.643 00:19:20.643 Suite: bdevio tests on: Nvme1n1 00:19:20.901 Test: blockdev write read block ...passed 00:19:20.901 Test: blockdev write zeroes read block ...passed 00:19:20.901 Test: blockdev write zeroes read no split ...passed 00:19:20.901 Test: blockdev write zeroes read split ...passed 00:19:20.901 Test: blockdev write zeroes read split partial ...passed 00:19:20.901 Test: blockdev reset ...[2024-07-15 20:24:59.355766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:20.901 [2024-07-15 20:24:59.355882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eda60 (9): Bad file descriptor 00:19:21.159 [2024-07-15 20:24:59.458920] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:21.159 passed 00:19:21.159 Test: blockdev write read 8 blocks ...passed 00:19:21.159 Test: blockdev write read size > 128k ...passed 00:19:21.159 Test: blockdev write read invalid size ...passed 00:19:21.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.159 Test: blockdev write read max offset ...passed 00:19:21.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.159 Test: blockdev writev readv 8 blocks ...passed 00:19:21.159 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.159 Test: blockdev writev readv block ...passed 00:19:21.159 Test: blockdev writev readv size > 128k ...passed 00:19:21.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.159 Test: blockdev comparev and writev ...[2024-07-15 20:24:59.676740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.159 [2024-07-15 20:24:59.676776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.159 [2024-07-15 20:24:59.676801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.159 [2024-07-15 20:24:59.676827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.159 [2024-07-15 20:24:59.677199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.159 [2024-07-15 20:24:59.677223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.159 [2024-07-15 20:24:59.677244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.159 [2024-07-15 20:24:59.677260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.159 [2024-07-15 20:24:59.677612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.159 [2024-07-15 20:24:59.677635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.159 [2024-07-15 20:24:59.677657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.159 [2024-07-15 20:24:59.677672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.159 [2024-07-15 20:24:59.678039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.159 [2024-07-15 20:24:59.678063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:21.160 [2024-07-15 20:24:59.678084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.160 [2024-07-15 20:24:59.678100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.418 passed 00:19:21.418 Test: blockdev nvme passthru rw ...passed 00:19:21.418 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:24:59.762245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.418 [2024-07-15 20:24:59.762272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.418 [2024-07-15 20:24:59.762477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.418 [2024-07-15 20:24:59.762500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:21.418 [2024-07-15 20:24:59.762697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.418 [2024-07-15 20:24:59.762719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:21.418 [2024-07-15 20:24:59.762931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.418 [2024-07-15 20:24:59.762954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:21.418 passed 00:19:21.418 Test: blockdev nvme admin passthru ...passed 00:19:21.418 Test: blockdev copy ...passed 00:19:21.418 00:19:21.418 Run Summary: Type Total Ran Passed Failed Inactive 00:19:21.418 suites 1 1 n/a 0 0 00:19:21.418 tests 23 23 23 0 0 00:19:21.418 asserts 152 152 152 0 n/a 00:19:21.418 00:19:21.418 Elapsed time = 1.340 seconds 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.676 rmmod nvme_tcp 00:19:21.676 rmmod nvme_fabrics 00:19:21.676 rmmod nvme_keyring 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4058367 ']' 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4058367 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
4058367 ']' 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 4058367 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4058367 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4058367' 00:19:21.676 killing process with pid 4058367 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 4058367 00:19:21.676 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 4058367 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.936 20:25:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.468 20:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:24.468 00:19:24.468 real 0m6.423s 00:19:24.468 user 0m11.210s 00:19:24.468 sys 0m2.043s 00:19:24.468 20:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:24.468 20:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:24.468 ************************************ 00:19:24.468 END TEST nvmf_bdevio 00:19:24.468 ************************************ 00:19:24.468 20:25:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:24.468 20:25:02 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:24.468 20:25:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:24.468 20:25:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:24.468 20:25:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:24.468 ************************************ 00:19:24.468 START TEST nvmf_auth_target 00:19:24.468 ************************************ 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:24.468 * Looking for test storage... 
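At this point the bdevio target has been torn down: nvmftestfini unloads the kernel NVMe/TCP initiator stack and killprocess stops the nvmf_tgt that served nqn.2016-06.io.spdk:cnode1. A rough shell sketch of that teardown, reconstructed from the trace above (the pid and interface names are specific to this run, and the namespace removal is an assumed equivalent of _remove_spdk_ns):

    modprobe -v -r nvme-tcp         # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    pid=4058367                     # nvmfpid recorded when the target was launched
    if kill -0 "$pid" && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        kill "$pid" && wait "$pid"  # wait only succeeds because nvmf_tgt is a child of the test shell
    fi
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                       # drop the initiator-side test address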
00:19:24.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.468 20:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.383 20:25:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:26.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:26.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:26.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:26.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.383 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:19:26.384 00:19:26.384 --- 10.0.0.2 ping statistics --- 00:19:26.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.384 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:19:26.384 00:19:26.384 --- 10.0.0.1 ping statistics --- 00:19:26.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.384 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4060538 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4060538 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4060538 ']' 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
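The nvmftestinit sequence above turns the two ice ports into a point-to-point NVMe/TCP test topology: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and nvmf_tgt is then started inside the namespace with nvmf_auth logging enabled. Condensed from the commands in the trace (interface and namespace names are whatever this host detected, not fixed values):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &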
00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.384 20:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4060619 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a0908f4754e2fdbcea8406a57079c93b526ca8308f5f19e2 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8Bb 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a0908f4754e2fdbcea8406a57079c93b526ca8308f5f19e2 0 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a0908f4754e2fdbcea8406a57079c93b526ca8308f5f19e2 0 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a0908f4754e2fdbcea8406a57079c93b526ca8308f5f19e2 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8Bb 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8Bb 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.8Bb 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=56fb628ab0f22c708a6b9250acc13e4a59ae7bb2f84a0828eaf3cc1d40d57d58 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Alc 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 56fb628ab0f22c708a6b9250acc13e4a59ae7bb2f84a0828eaf3cc1d40d57d58 3 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 56fb628ab0f22c708a6b9250acc13e4a59ae7bb2f84a0828eaf3cc1d40d57d58 3 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=56fb628ab0f22c708a6b9250acc13e4a59ae7bb2f84a0828eaf3cc1d40d57d58 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:26.642 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Alc 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Alc 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Alc 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=17ea2a02a7026c30bb175b05af0371ef 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CJa 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 17ea2a02a7026c30bb175b05af0371ef 1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 17ea2a02a7026c30bb175b05af0371ef 1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=17ea2a02a7026c30bb175b05af0371ef 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CJa 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CJa 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.CJa 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=30264ed4e65b7a2994679565ecd61f79a8c21a540fd86371 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ApJ 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 30264ed4e65b7a2994679565ecd61f79a8c21a540fd86371 2 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 30264ed4e65b7a2994679565ecd61f79a8c21a540fd86371 2 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=30264ed4e65b7a2994679565ecd61f79a8c21a540fd86371 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ApJ 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ApJ 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ApJ 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=915c341e521a72efe72c2cdbd3bfcdf14241407d665731bb 00:19:26.901 
20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nlz 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 915c341e521a72efe72c2cdbd3bfcdf14241407d665731bb 2 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 915c341e521a72efe72c2cdbd3bfcdf14241407d665731bb 2 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=915c341e521a72efe72c2cdbd3bfcdf14241407d665731bb 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nlz 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nlz 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.nlz 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8f0351734f8a7654bad08f1f6888d174 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oOs 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8f0351734f8a7654bad08f1f6888d174 1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8f0351734f8a7654bad08f1f6888d174 1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8f0351734f8a7654bad08f1f6888d174 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oOs 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oOs 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.oOs 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5594739cd371e07bef85da3a0163002c4ed3db1b0a1403248bb17bd44c7aa425 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RMq 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5594739cd371e07bef85da3a0163002c4ed3db1b0a1403248bb17bd44c7aa425 3 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5594739cd371e07bef85da3a0163002c4ed3db1b0a1403248bb17bd44c7aa425 3 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5594739cd371e07bef85da3a0163002c4ed3db1b0a1403248bb17bd44c7aa425 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:26.901 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RMq 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RMq 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.RMq 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4060538 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4060538 ']' 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
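The gen_dhchap_key calls above produce the DHHC-1 secrets that auth.sh registers on both sides a few lines further down (keyring_file_add_key on the target RPC socket and again on /var/tmp/host.sock, then nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key). The recipe visible in the trace is: random hex from /dev/urandom, wrapped as DHHC-1:<digest-id>:<base64>:, where the base64 payload appears to be the ASCII key followed by its CRC-32. A hand-rolled sketch of the same formatting (the inline python3 is a reconstruction of what format_dhchap_key's heredoc seems to compute, not a copy of it):

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, matching the "null 48" key above
    digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
    secret=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest")
    keyfile=$(mktemp -t spdk.key-null.XXX) # e.g. /tmp/spdk.key-null.8Bb in this run
    echo "$secret" > "$keyfile" && chmod 0600 "$keyfile"
    scripts/rpc.py keyring_file_add_key key0 "$keyfile"                        # target side
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 "$keyfile"  # host side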
00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.159 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4060619 /var/tmp/host.sock 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4060619 ']' 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:27.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.415 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8Bb 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8Bb 00:19:27.672 20:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8Bb 00:19:27.929 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Alc ]] 00:19:27.929 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Alc 00:19:27.929 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.929 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.929 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.929 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Alc 00:19:27.929 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Alc 00:19:28.186 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:28.186 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CJa 00:19:28.186 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.186 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.186 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.186 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CJa 00:19:28.186 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CJa 00:19:28.443 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ApJ ]] 00:19:28.443 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ApJ 00:19:28.443 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.443 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.443 20:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.443 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ApJ 00:19:28.443 20:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ApJ 00:19:28.699 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:28.699 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nlz 00:19:28.699 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.699 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.699 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.699 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nlz 00:19:28.699 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nlz 00:19:28.956 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.oOs ]] 00:19:28.956 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oOs 00:19:28.956 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.956 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.956 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.956 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oOs 00:19:28.956 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.oOs 00:19:29.213 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:29.213 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RMq 00:19:29.213 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.213 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.213 20:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.213 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.RMq 00:19:29.213 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.RMq 00:19:29.470 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:29.470 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:29.470 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.470 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.470 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.470 20:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.727 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.985 00:19:29.985 20:25:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.985 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.985 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.242 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.242 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.242 20:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.242 20:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.242 20:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.242 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.242 { 00:19:30.242 "cntlid": 1, 00:19:30.242 "qid": 0, 00:19:30.242 "state": "enabled", 00:19:30.242 "thread": "nvmf_tgt_poll_group_000", 00:19:30.242 "listen_address": { 00:19:30.242 "trtype": "TCP", 00:19:30.242 "adrfam": "IPv4", 00:19:30.242 "traddr": "10.0.0.2", 00:19:30.242 "trsvcid": "4420" 00:19:30.242 }, 00:19:30.242 "peer_address": { 00:19:30.242 "trtype": "TCP", 00:19:30.242 "adrfam": "IPv4", 00:19:30.242 "traddr": "10.0.0.1", 00:19:30.242 "trsvcid": "49166" 00:19:30.242 }, 00:19:30.242 "auth": { 00:19:30.242 "state": "completed", 00:19:30.242 "digest": "sha256", 00:19:30.242 "dhgroup": "null" 00:19:30.242 } 00:19:30.242 } 00:19:30.242 ]' 00:19:30.242 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.499 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.499 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.499 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:30.499 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.499 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.499 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.499 20:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.758 20:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:19:31.726 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.726 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.726 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.726 20:25:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.726 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.726 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.726 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.726 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.983 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.984 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.241 00:19:32.241 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.241 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.241 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.499 { 00:19:32.499 "cntlid": 3, 00:19:32.499 "qid": 0, 00:19:32.499 
"state": "enabled", 00:19:32.499 "thread": "nvmf_tgt_poll_group_000", 00:19:32.499 "listen_address": { 00:19:32.499 "trtype": "TCP", 00:19:32.499 "adrfam": "IPv4", 00:19:32.499 "traddr": "10.0.0.2", 00:19:32.499 "trsvcid": "4420" 00:19:32.499 }, 00:19:32.499 "peer_address": { 00:19:32.499 "trtype": "TCP", 00:19:32.499 "adrfam": "IPv4", 00:19:32.499 "traddr": "10.0.0.1", 00:19:32.499 "trsvcid": "49200" 00:19:32.499 }, 00:19:32.499 "auth": { 00:19:32.499 "state": "completed", 00:19:32.499 "digest": "sha256", 00:19:32.499 "dhgroup": "null" 00:19:32.499 } 00:19:32.499 } 00:19:32.499 ]' 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.499 20:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.499 20:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:32.499 20:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.756 20:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.756 20:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.756 20:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.014 20:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.948 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.206 20:25:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.206 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.464 00:19:34.464 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.464 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.464 20:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.722 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.722 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.722 20:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.722 20:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.722 20:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.722 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.722 { 00:19:34.722 "cntlid": 5, 00:19:34.722 "qid": 0, 00:19:34.722 "state": "enabled", 00:19:34.722 "thread": "nvmf_tgt_poll_group_000", 00:19:34.722 "listen_address": { 00:19:34.722 "trtype": "TCP", 00:19:34.722 "adrfam": "IPv4", 00:19:34.722 "traddr": "10.0.0.2", 00:19:34.722 "trsvcid": "4420" 00:19:34.722 }, 00:19:34.722 "peer_address": { 00:19:34.722 "trtype": "TCP", 00:19:34.722 "adrfam": "IPv4", 00:19:34.722 "traddr": "10.0.0.1", 00:19:34.722 "trsvcid": "51870" 00:19:34.722 }, 00:19:34.722 "auth": { 00:19:34.722 "state": "completed", 00:19:34.722 "digest": "sha256", 00:19:34.722 "dhgroup": "null" 00:19:34.722 } 00:19:34.722 } 00:19:34.722 ]' 00:19:34.722 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.979 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.979 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.979 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:34.979 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:34.979 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.979 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.979 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.237 20:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.169 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.427 20:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.685 00:19:36.685 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.685 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.685 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.943 { 00:19:36.943 "cntlid": 7, 00:19:36.943 "qid": 0, 00:19:36.943 "state": "enabled", 00:19:36.943 "thread": "nvmf_tgt_poll_group_000", 00:19:36.943 "listen_address": { 00:19:36.943 "trtype": "TCP", 00:19:36.943 "adrfam": "IPv4", 00:19:36.943 "traddr": "10.0.0.2", 00:19:36.943 "trsvcid": "4420" 00:19:36.943 }, 00:19:36.943 "peer_address": { 00:19:36.943 "trtype": "TCP", 00:19:36.943 "adrfam": "IPv4", 00:19:36.943 "traddr": "10.0.0.1", 00:19:36.943 "trsvcid": "51894" 00:19:36.943 }, 00:19:36.943 "auth": { 00:19:36.943 "state": "completed", 00:19:36.943 "digest": "sha256", 00:19:36.943 "dhgroup": "null" 00:19:36.943 } 00:19:36.943 } 00:19:36.943 ]' 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:36.943 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.201 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.201 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.201 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.460 20:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.392 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.650 20:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.650 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.650 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.908 00:19:38.908 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.908 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.908 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.166 { 00:19:39.166 "cntlid": 9, 00:19:39.166 "qid": 0, 00:19:39.166 "state": "enabled", 00:19:39.166 "thread": "nvmf_tgt_poll_group_000", 00:19:39.166 "listen_address": { 00:19:39.166 "trtype": "TCP", 00:19:39.166 "adrfam": "IPv4", 00:19:39.166 "traddr": "10.0.0.2", 00:19:39.166 "trsvcid": "4420" 00:19:39.166 }, 00:19:39.166 "peer_address": { 00:19:39.166 "trtype": "TCP", 00:19:39.166 "adrfam": "IPv4", 00:19:39.166 "traddr": "10.0.0.1", 00:19:39.166 "trsvcid": "51918" 00:19:39.166 }, 00:19:39.166 "auth": { 00:19:39.166 "state": "completed", 00:19:39.166 "digest": "sha256", 00:19:39.166 "dhgroup": "ffdhe2048" 00:19:39.166 } 00:19:39.166 } 00:19:39.166 ]' 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.166 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.424 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.424 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.424 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.682 20:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.615 20:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.874 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.132 00:19:41.132 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.132 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.132 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.389 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.389 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.390 20:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.390 20:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.390 20:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.390 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.390 { 00:19:41.390 "cntlid": 11, 00:19:41.390 "qid": 0, 00:19:41.390 "state": "enabled", 00:19:41.390 "thread": "nvmf_tgt_poll_group_000", 00:19:41.390 "listen_address": { 00:19:41.390 "trtype": "TCP", 00:19:41.390 "adrfam": "IPv4", 00:19:41.390 "traddr": "10.0.0.2", 00:19:41.390 "trsvcid": "4420" 00:19:41.390 }, 00:19:41.390 "peer_address": { 00:19:41.390 "trtype": "TCP", 00:19:41.390 "adrfam": "IPv4", 00:19:41.390 "traddr": "10.0.0.1", 00:19:41.390 "trsvcid": "51946" 00:19:41.390 }, 00:19:41.390 "auth": { 00:19:41.390 "state": "completed", 00:19:41.390 "digest": "sha256", 00:19:41.390 "dhgroup": "ffdhe2048" 00:19:41.390 } 00:19:41.390 } 00:19:41.390 ]' 00:19:41.390 
20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.390 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.390 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.390 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.648 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.648 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.648 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.648 20:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.905 20:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.838 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.096 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.353 00:19:43.353 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.353 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.353 20:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.611 { 00:19:43.611 "cntlid": 13, 00:19:43.611 "qid": 0, 00:19:43.611 "state": "enabled", 00:19:43.611 "thread": "nvmf_tgt_poll_group_000", 00:19:43.611 "listen_address": { 00:19:43.611 "trtype": "TCP", 00:19:43.611 "adrfam": "IPv4", 00:19:43.611 "traddr": "10.0.0.2", 00:19:43.611 "trsvcid": "4420" 00:19:43.611 }, 00:19:43.611 "peer_address": { 00:19:43.611 "trtype": "TCP", 00:19:43.611 "adrfam": "IPv4", 00:19:43.611 "traddr": "10.0.0.1", 00:19:43.611 "trsvcid": "59344" 00:19:43.611 }, 00:19:43.611 "auth": { 00:19:43.611 "state": "completed", 00:19:43.611 "digest": "sha256", 00:19:43.611 "dhgroup": "ffdhe2048" 00:19:43.611 } 00:19:43.611 } 00:19:43.611 ]' 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.611 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.870 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.870 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.870 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.870 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.870 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.127 20:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.140 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.703 00:19:45.703 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.703 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.703 20:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.703 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.703 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.703 20:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.703 20:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.703 20:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.703 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.703 { 00:19:45.703 "cntlid": 15, 00:19:45.703 "qid": 0, 00:19:45.703 "state": "enabled", 00:19:45.703 "thread": "nvmf_tgt_poll_group_000", 00:19:45.703 "listen_address": { 00:19:45.703 "trtype": "TCP", 00:19:45.703 "adrfam": "IPv4", 00:19:45.703 "traddr": "10.0.0.2", 00:19:45.703 "trsvcid": "4420" 00:19:45.703 }, 00:19:45.703 "peer_address": { 00:19:45.703 "trtype": "TCP", 00:19:45.703 "adrfam": "IPv4", 00:19:45.703 "traddr": "10.0.0.1", 00:19:45.703 "trsvcid": "59384" 00:19:45.703 }, 00:19:45.703 "auth": { 00:19:45.703 "state": "completed", 00:19:45.703 "digest": "sha256", 00:19:45.703 "dhgroup": "ffdhe2048" 00:19:45.703 } 00:19:45.703 } 00:19:45.703 ]' 00:19:45.703 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.960 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.960 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.960 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.960 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.960 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.960 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.960 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.241 20:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.179 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.435 20:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.692 00:19:47.692 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.692 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.692 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.949 { 00:19:47.949 "cntlid": 17, 00:19:47.949 "qid": 0, 00:19:47.949 "state": "enabled", 00:19:47.949 "thread": "nvmf_tgt_poll_group_000", 00:19:47.949 "listen_address": { 00:19:47.949 "trtype": "TCP", 00:19:47.949 "adrfam": "IPv4", 00:19:47.949 "traddr": 
"10.0.0.2", 00:19:47.949 "trsvcid": "4420" 00:19:47.949 }, 00:19:47.949 "peer_address": { 00:19:47.949 "trtype": "TCP", 00:19:47.949 "adrfam": "IPv4", 00:19:47.949 "traddr": "10.0.0.1", 00:19:47.949 "trsvcid": "59420" 00:19:47.949 }, 00:19:47.949 "auth": { 00:19:47.949 "state": "completed", 00:19:47.949 "digest": "sha256", 00:19:47.949 "dhgroup": "ffdhe3072" 00:19:47.949 } 00:19:47.949 } 00:19:47.949 ]' 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.949 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.207 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.207 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.207 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.464 20:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.397 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.655 20:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.913 00:19:49.913 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.914 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.914 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.171 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.172 { 00:19:50.172 "cntlid": 19, 00:19:50.172 "qid": 0, 00:19:50.172 "state": "enabled", 00:19:50.172 "thread": "nvmf_tgt_poll_group_000", 00:19:50.172 "listen_address": { 00:19:50.172 "trtype": "TCP", 00:19:50.172 "adrfam": "IPv4", 00:19:50.172 "traddr": "10.0.0.2", 00:19:50.172 "trsvcid": "4420" 00:19:50.172 }, 00:19:50.172 "peer_address": { 00:19:50.172 "trtype": "TCP", 00:19:50.172 "adrfam": "IPv4", 00:19:50.172 "traddr": "10.0.0.1", 00:19:50.172 "trsvcid": "59444" 00:19:50.172 }, 00:19:50.172 "auth": { 00:19:50.172 "state": "completed", 00:19:50.172 "digest": "sha256", 00:19:50.172 "dhgroup": "ffdhe3072" 00:19:50.172 } 00:19:50.172 } 00:19:50.172 ]' 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.172 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.431 20:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.365 20:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.623 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.189 00:19:52.189 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.189 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.189 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.447 { 00:19:52.447 "cntlid": 21, 00:19:52.447 "qid": 0, 00:19:52.447 "state": "enabled", 00:19:52.447 "thread": "nvmf_tgt_poll_group_000", 00:19:52.447 "listen_address": { 00:19:52.447 "trtype": "TCP", 00:19:52.447 "adrfam": "IPv4", 00:19:52.447 "traddr": "10.0.0.2", 00:19:52.447 "trsvcid": "4420" 00:19:52.447 }, 00:19:52.447 "peer_address": { 00:19:52.447 "trtype": "TCP", 00:19:52.447 "adrfam": "IPv4", 00:19:52.447 "traddr": "10.0.0.1", 00:19:52.447 "trsvcid": "59476" 00:19:52.447 }, 00:19:52.447 "auth": { 00:19:52.447 "state": "completed", 00:19:52.447 "digest": "sha256", 00:19:52.447 "dhgroup": "ffdhe3072" 00:19:52.447 } 00:19:52.447 } 00:19:52.447 ]' 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.447 20:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.705 20:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.642 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.899 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.155 00:19:54.156 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.156 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.156 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.413 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.413 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.413 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.413 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:54.671 20:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.671 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.671 { 00:19:54.671 "cntlid": 23, 00:19:54.671 "qid": 0, 00:19:54.671 "state": "enabled", 00:19:54.671 "thread": "nvmf_tgt_poll_group_000", 00:19:54.671 "listen_address": { 00:19:54.671 "trtype": "TCP", 00:19:54.671 "adrfam": "IPv4", 00:19:54.671 "traddr": "10.0.0.2", 00:19:54.671 "trsvcid": "4420" 00:19:54.671 }, 00:19:54.671 "peer_address": { 00:19:54.671 "trtype": "TCP", 00:19:54.671 "adrfam": "IPv4", 00:19:54.671 "traddr": "10.0.0.1", 00:19:54.671 "trsvcid": "46078" 00:19:54.671 }, 00:19:54.671 "auth": { 00:19:54.671 "state": "completed", 00:19:54.671 "digest": "sha256", 00:19:54.671 "dhgroup": "ffdhe3072" 00:19:54.671 } 00:19:54.671 } 00:19:54.671 ]' 00:19:54.671 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.671 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.671 20:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.671 20:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.671 20:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.671 20:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.671 20:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.671 20:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.929 20:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.862 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.119 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.682 00:19:56.682 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.682 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.682 20:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.682 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.682 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.682 20:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.682 20:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.682 20:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.682 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.682 { 00:19:56.682 "cntlid": 25, 00:19:56.682 "qid": 0, 00:19:56.682 "state": "enabled", 00:19:56.682 "thread": "nvmf_tgt_poll_group_000", 00:19:56.682 "listen_address": { 00:19:56.682 "trtype": "TCP", 00:19:56.682 "adrfam": "IPv4", 00:19:56.682 "traddr": "10.0.0.2", 00:19:56.682 "trsvcid": "4420" 00:19:56.682 }, 00:19:56.682 "peer_address": { 00:19:56.682 "trtype": "TCP", 00:19:56.682 "adrfam": "IPv4", 00:19:56.682 "traddr": "10.0.0.1", 00:19:56.682 "trsvcid": "46116" 00:19:56.682 }, 00:19:56.682 "auth": { 00:19:56.682 "state": "completed", 00:19:56.682 "digest": "sha256", 00:19:56.682 "dhgroup": "ffdhe4096" 00:19:56.682 } 00:19:56.682 } 00:19:56.682 ]' 00:19:56.939 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.939 20:25:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.939 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.939 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.939 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.939 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.939 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.939 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.197 20:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.129 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.387 20:25:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.387 20:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.644 00:19:58.644 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.644 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.644 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.900 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.900 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.900 20:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.900 20:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.900 20:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.900 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.900 { 00:19:58.900 "cntlid": 27, 00:19:58.900 "qid": 0, 00:19:58.900 "state": "enabled", 00:19:58.900 "thread": "nvmf_tgt_poll_group_000", 00:19:58.900 "listen_address": { 00:19:58.900 "trtype": "TCP", 00:19:58.900 "adrfam": "IPv4", 00:19:58.900 "traddr": "10.0.0.2", 00:19:58.900 "trsvcid": "4420" 00:19:58.900 }, 00:19:58.900 "peer_address": { 00:19:58.900 "trtype": "TCP", 00:19:58.900 "adrfam": "IPv4", 00:19:58.900 "traddr": "10.0.0.1", 00:19:58.900 "trsvcid": "46136" 00:19:58.900 }, 00:19:58.900 "auth": { 00:19:58.900 "state": "completed", 00:19:58.900 "digest": "sha256", 00:19:58.900 "dhgroup": "ffdhe4096" 00:19:58.900 } 00:19:58.900 } 00:19:58.900 ]' 00:19:58.900 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.157 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.157 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.157 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.157 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.157 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.157 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.157 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.414 20:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.349 20:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.607 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.169 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.169 20:25:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.169 { 00:20:01.169 "cntlid": 29, 00:20:01.169 "qid": 0, 00:20:01.169 "state": "enabled", 00:20:01.169 "thread": "nvmf_tgt_poll_group_000", 00:20:01.169 "listen_address": { 00:20:01.169 "trtype": "TCP", 00:20:01.169 "adrfam": "IPv4", 00:20:01.169 "traddr": "10.0.0.2", 00:20:01.169 "trsvcid": "4420" 00:20:01.169 }, 00:20:01.169 "peer_address": { 00:20:01.169 "trtype": "TCP", 00:20:01.169 "adrfam": "IPv4", 00:20:01.169 "traddr": "10.0.0.1", 00:20:01.169 "trsvcid": "46150" 00:20:01.169 }, 00:20:01.169 "auth": { 00:20:01.169 "state": "completed", 00:20:01.169 "digest": "sha256", 00:20:01.169 "dhgroup": "ffdhe4096" 00:20:01.169 } 00:20:01.169 } 00:20:01.169 ]' 00:20:01.169 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.426 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.426 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.426 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.426 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.426 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.426 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.426 20:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.683 20:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:20:02.614 20:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.614 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.614 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.614 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.614 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.614 20:25:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.614 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.614 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.869 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.125 00:20:03.125 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.125 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.125 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.380 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.380 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.636 { 00:20:03.636 "cntlid": 31, 00:20:03.636 "qid": 0, 00:20:03.636 "state": "enabled", 00:20:03.636 "thread": "nvmf_tgt_poll_group_000", 00:20:03.636 "listen_address": { 00:20:03.636 "trtype": "TCP", 00:20:03.636 "adrfam": "IPv4", 00:20:03.636 "traddr": "10.0.0.2", 00:20:03.636 "trsvcid": "4420" 00:20:03.636 }, 
00:20:03.636 "peer_address": { 00:20:03.636 "trtype": "TCP", 00:20:03.636 "adrfam": "IPv4", 00:20:03.636 "traddr": "10.0.0.1", 00:20:03.636 "trsvcid": "45154" 00:20:03.636 }, 00:20:03.636 "auth": { 00:20:03.636 "state": "completed", 00:20:03.636 "digest": "sha256", 00:20:03.636 "dhgroup": "ffdhe4096" 00:20:03.636 } 00:20:03.636 } 00:20:03.636 ]' 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:03.636 20:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.636 20:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.636 20:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.636 20:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.892 20:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.822 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.079 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:05.079 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.079 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.079 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:05.079 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.079 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:05.080 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.080 20:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.080 20:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.080 20:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.080 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.080 20:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.644 00:20:05.644 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.644 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.644 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.901 { 00:20:05.901 "cntlid": 33, 00:20:05.901 "qid": 0, 00:20:05.901 "state": "enabled", 00:20:05.901 "thread": "nvmf_tgt_poll_group_000", 00:20:05.901 "listen_address": { 00:20:05.901 "trtype": "TCP", 00:20:05.901 "adrfam": "IPv4", 00:20:05.901 "traddr": "10.0.0.2", 00:20:05.901 "trsvcid": "4420" 00:20:05.901 }, 00:20:05.901 "peer_address": { 00:20:05.901 "trtype": "TCP", 00:20:05.901 "adrfam": "IPv4", 00:20:05.901 "traddr": "10.0.0.1", 00:20:05.901 "trsvcid": "45190" 00:20:05.901 }, 00:20:05.901 "auth": { 00:20:05.901 "state": "completed", 00:20:05.901 "digest": "sha256", 00:20:05.901 "dhgroup": "ffdhe6144" 00:20:05.901 } 00:20:05.901 } 00:20:05.901 ]' 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.901 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.158 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.158 20:25:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.158 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.158 20:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.536 20:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.100 00:20:08.100 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.100 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.100 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.357 { 00:20:08.357 "cntlid": 35, 00:20:08.357 "qid": 0, 00:20:08.357 "state": "enabled", 00:20:08.357 "thread": "nvmf_tgt_poll_group_000", 00:20:08.357 "listen_address": { 00:20:08.357 "trtype": "TCP", 00:20:08.357 "adrfam": "IPv4", 00:20:08.357 "traddr": "10.0.0.2", 00:20:08.357 "trsvcid": "4420" 00:20:08.357 }, 00:20:08.357 "peer_address": { 00:20:08.357 "trtype": "TCP", 00:20:08.357 "adrfam": "IPv4", 00:20:08.357 "traddr": "10.0.0.1", 00:20:08.357 "trsvcid": "45212" 00:20:08.357 }, 00:20:08.357 "auth": { 00:20:08.357 "state": "completed", 00:20:08.357 "digest": "sha256", 00:20:08.357 "dhgroup": "ffdhe6144" 00:20:08.357 } 00:20:08.357 } 00:20:08.357 ]' 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.357 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.614 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.614 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.614 20:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.872 20:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:20:09.805 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.806 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.806 20:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.806 20:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.806 20:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.806 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.806 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:09.806 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.063 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.629 00:20:10.629 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.629 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.629 20:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.629 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.629 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.629 20:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.629 20:25:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.887 { 00:20:10.887 "cntlid": 37, 00:20:10.887 "qid": 0, 00:20:10.887 "state": "enabled", 00:20:10.887 "thread": "nvmf_tgt_poll_group_000", 00:20:10.887 "listen_address": { 00:20:10.887 "trtype": "TCP", 00:20:10.887 "adrfam": "IPv4", 00:20:10.887 "traddr": "10.0.0.2", 00:20:10.887 "trsvcid": "4420" 00:20:10.887 }, 00:20:10.887 "peer_address": { 00:20:10.887 "trtype": "TCP", 00:20:10.887 "adrfam": "IPv4", 00:20:10.887 "traddr": "10.0.0.1", 00:20:10.887 "trsvcid": "45238" 00:20:10.887 }, 00:20:10.887 "auth": { 00:20:10.887 "state": "completed", 00:20:10.887 "digest": "sha256", 00:20:10.887 "dhgroup": "ffdhe6144" 00:20:10.887 } 00:20:10.887 } 00:20:10.887 ]' 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.887 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.145 20:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:12.113 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.372 20:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.938 00:20:12.938 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.938 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.938 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.196 { 00:20:13.196 "cntlid": 39, 00:20:13.196 "qid": 0, 00:20:13.196 "state": "enabled", 00:20:13.196 "thread": "nvmf_tgt_poll_group_000", 00:20:13.196 "listen_address": { 00:20:13.196 "trtype": "TCP", 00:20:13.196 "adrfam": "IPv4", 00:20:13.196 "traddr": "10.0.0.2", 00:20:13.196 "trsvcid": "4420" 00:20:13.196 }, 00:20:13.196 "peer_address": { 00:20:13.196 "trtype": "TCP", 00:20:13.196 "adrfam": "IPv4", 00:20:13.196 "traddr": "10.0.0.1", 00:20:13.196 "trsvcid": "40440" 00:20:13.196 }, 00:20:13.196 "auth": { 00:20:13.196 "state": "completed", 00:20:13.196 "digest": "sha256", 00:20:13.196 "dhgroup": "ffdhe6144" 00:20:13.196 } 00:20:13.196 } 00:20:13.196 ]' 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.196 20:25:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.196 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.453 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.453 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.453 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.711 20:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.645 20:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.911 20:25:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.911 20:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.881 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.881 { 00:20:15.881 "cntlid": 41, 00:20:15.881 "qid": 0, 00:20:15.881 "state": "enabled", 00:20:15.881 "thread": "nvmf_tgt_poll_group_000", 00:20:15.881 "listen_address": { 00:20:15.881 "trtype": "TCP", 00:20:15.881 "adrfam": "IPv4", 00:20:15.881 "traddr": "10.0.0.2", 00:20:15.881 "trsvcid": "4420" 00:20:15.881 }, 00:20:15.881 "peer_address": { 00:20:15.881 "trtype": "TCP", 00:20:15.881 "adrfam": "IPv4", 00:20:15.881 "traddr": "10.0.0.1", 00:20:15.881 "trsvcid": "40476" 00:20:15.881 }, 00:20:15.881 "auth": { 00:20:15.881 "state": "completed", 00:20:15.881 "digest": "sha256", 00:20:15.881 "dhgroup": "ffdhe8192" 00:20:15.881 } 00:20:15.881 } 00:20:15.881 ]' 00:20:15.881 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.139 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.139 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.139 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.139 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.139 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.139 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.139 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.397 20:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:17.331 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.589 20:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.523 00:20:18.523 20:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.523 20:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.523 20:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.781 { 00:20:18.781 "cntlid": 43, 00:20:18.781 "qid": 0, 00:20:18.781 "state": "enabled", 00:20:18.781 "thread": "nvmf_tgt_poll_group_000", 00:20:18.781 "listen_address": { 00:20:18.781 "trtype": "TCP", 00:20:18.781 "adrfam": "IPv4", 00:20:18.781 "traddr": "10.0.0.2", 00:20:18.781 "trsvcid": "4420" 00:20:18.781 }, 00:20:18.781 "peer_address": { 00:20:18.781 "trtype": "TCP", 00:20:18.781 "adrfam": "IPv4", 00:20:18.781 "traddr": "10.0.0.1", 00:20:18.781 "trsvcid": "40504" 00:20:18.781 }, 00:20:18.781 "auth": { 00:20:18.781 "state": "completed", 00:20:18.781 "digest": "sha256", 00:20:18.781 "dhgroup": "ffdhe8192" 00:20:18.781 } 00:20:18.781 } 00:20:18.781 ]' 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.781 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.039 20:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:19.972 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.538 20:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.470 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.470 { 00:20:21.470 "cntlid": 45, 00:20:21.470 "qid": 0, 00:20:21.470 "state": "enabled", 00:20:21.470 "thread": "nvmf_tgt_poll_group_000", 00:20:21.470 "listen_address": { 00:20:21.470 "trtype": "TCP", 00:20:21.470 "adrfam": "IPv4", 00:20:21.470 "traddr": "10.0.0.2", 00:20:21.470 "trsvcid": "4420" 
00:20:21.470 }, 00:20:21.470 "peer_address": { 00:20:21.470 "trtype": "TCP", 00:20:21.470 "adrfam": "IPv4", 00:20:21.470 "traddr": "10.0.0.1", 00:20:21.470 "trsvcid": "40534" 00:20:21.470 }, 00:20:21.470 "auth": { 00:20:21.470 "state": "completed", 00:20:21.470 "digest": "sha256", 00:20:21.470 "dhgroup": "ffdhe8192" 00:20:21.470 } 00:20:21.470 } 00:20:21.470 ]' 00:20:21.470 20:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.726 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.726 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.726 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.726 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.726 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.726 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.726 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.983 20:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.958 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.215 20:26:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.215 20:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.149 00:20:24.149 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.149 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.149 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.406 { 00:20:24.406 "cntlid": 47, 00:20:24.406 "qid": 0, 00:20:24.406 "state": "enabled", 00:20:24.406 "thread": "nvmf_tgt_poll_group_000", 00:20:24.406 "listen_address": { 00:20:24.406 "trtype": "TCP", 00:20:24.406 "adrfam": "IPv4", 00:20:24.406 "traddr": "10.0.0.2", 00:20:24.406 "trsvcid": "4420" 00:20:24.406 }, 00:20:24.406 "peer_address": { 00:20:24.406 "trtype": "TCP", 00:20:24.406 "adrfam": "IPv4", 00:20:24.406 "traddr": "10.0.0.1", 00:20:24.406 "trsvcid": "59618" 00:20:24.406 }, 00:20:24.406 "auth": { 00:20:24.406 "state": "completed", 00:20:24.406 "digest": "sha256", 00:20:24.406 "dhgroup": "ffdhe8192" 00:20:24.406 } 00:20:24.406 } 00:20:24.406 ]' 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.406 20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.406 
20:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.664 20:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.598 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.856 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.422 00:20:26.422 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.422 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.422 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.679 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.679 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.679 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.679 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.679 20:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.679 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.679 { 00:20:26.679 "cntlid": 49, 00:20:26.679 "qid": 0, 00:20:26.679 "state": "enabled", 00:20:26.679 "thread": "nvmf_tgt_poll_group_000", 00:20:26.679 "listen_address": { 00:20:26.679 "trtype": "TCP", 00:20:26.679 "adrfam": "IPv4", 00:20:26.679 "traddr": "10.0.0.2", 00:20:26.679 "trsvcid": "4420" 00:20:26.679 }, 00:20:26.679 "peer_address": { 00:20:26.679 "trtype": "TCP", 00:20:26.679 "adrfam": "IPv4", 00:20:26.679 "traddr": "10.0.0.1", 00:20:26.679 "trsvcid": "59644" 00:20:26.679 }, 00:20:26.679 "auth": { 00:20:26.679 "state": "completed", 00:20:26.679 "digest": "sha384", 00:20:26.679 "dhgroup": "null" 00:20:26.679 } 00:20:26.679 } 00:20:26.679 ]' 00:20:26.679 20:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.679 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.679 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.679 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:26.679 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.679 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.679 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.679 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.954 20:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.886 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.144 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.401 00:20:28.401 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.401 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.401 20:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.659 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.659 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.659 20:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.659 20:26:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.659 20:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.659 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.659 { 00:20:28.659 "cntlid": 51, 00:20:28.659 "qid": 0, 00:20:28.659 "state": "enabled", 00:20:28.659 "thread": "nvmf_tgt_poll_group_000", 00:20:28.659 "listen_address": { 00:20:28.659 "trtype": "TCP", 00:20:28.659 "adrfam": "IPv4", 00:20:28.659 "traddr": "10.0.0.2", 00:20:28.659 "trsvcid": "4420" 00:20:28.659 }, 00:20:28.659 "peer_address": { 00:20:28.659 "trtype": "TCP", 00:20:28.659 "adrfam": "IPv4", 00:20:28.659 "traddr": "10.0.0.1", 00:20:28.659 "trsvcid": "59662" 00:20:28.659 }, 00:20:28.659 "auth": { 00:20:28.659 "state": "completed", 00:20:28.659 "digest": "sha384", 00:20:28.659 "dhgroup": "null" 00:20:28.659 } 00:20:28.659 } 00:20:28.659 ]' 00:20:28.659 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.916 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.916 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.916 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:28.916 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.916 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.916 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.916 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.173 20:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.106 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:30.364 20:26:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.364 20:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.621 00:20:30.621 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.621 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.621 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.878 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.878 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.878 20:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.878 20:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.135 { 00:20:31.135 "cntlid": 53, 00:20:31.135 "qid": 0, 00:20:31.135 "state": "enabled", 00:20:31.135 "thread": "nvmf_tgt_poll_group_000", 00:20:31.135 "listen_address": { 00:20:31.135 "trtype": "TCP", 00:20:31.135 "adrfam": "IPv4", 00:20:31.135 "traddr": "10.0.0.2", 00:20:31.135 "trsvcid": "4420" 00:20:31.135 }, 00:20:31.135 "peer_address": { 00:20:31.135 "trtype": "TCP", 00:20:31.135 "adrfam": "IPv4", 00:20:31.135 "traddr": "10.0.0.1", 00:20:31.135 "trsvcid": "59692" 00:20:31.135 }, 00:20:31.135 "auth": { 00:20:31.135 "state": "completed", 00:20:31.135 "digest": "sha384", 00:20:31.135 "dhgroup": "null" 00:20:31.135 } 00:20:31.135 } 00:20:31.135 ]' 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.135 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.393 20:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.325 20:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.583 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.149 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.149 { 00:20:33.149 "cntlid": 55, 00:20:33.149 "qid": 0, 00:20:33.149 "state": "enabled", 00:20:33.149 "thread": "nvmf_tgt_poll_group_000", 00:20:33.149 "listen_address": { 00:20:33.149 "trtype": "TCP", 00:20:33.149 "adrfam": "IPv4", 00:20:33.149 "traddr": "10.0.0.2", 00:20:33.149 "trsvcid": "4420" 00:20:33.149 }, 00:20:33.149 "peer_address": { 00:20:33.149 "trtype": "TCP", 00:20:33.149 "adrfam": "IPv4", 00:20:33.149 "traddr": "10.0.0.1", 00:20:33.149 "trsvcid": "40972" 00:20:33.149 }, 00:20:33.149 "auth": { 00:20:33.149 "state": "completed", 00:20:33.149 "digest": "sha384", 00:20:33.149 "dhgroup": "null" 00:20:33.149 } 00:20:33.149 } 00:20:33.149 ]' 00:20:33.149 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.407 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.407 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.407 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:33.407 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.407 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.407 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.407 20:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.665 20:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:20:34.605 20:26:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.605 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.864 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.122 00:20:35.379 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.379 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.379 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.638 { 00:20:35.638 "cntlid": 57, 00:20:35.638 "qid": 0, 00:20:35.638 "state": "enabled", 00:20:35.638 "thread": "nvmf_tgt_poll_group_000", 00:20:35.638 "listen_address": { 00:20:35.638 "trtype": "TCP", 00:20:35.638 "adrfam": "IPv4", 00:20:35.638 "traddr": "10.0.0.2", 00:20:35.638 "trsvcid": "4420" 00:20:35.638 }, 00:20:35.638 "peer_address": { 00:20:35.638 "trtype": "TCP", 00:20:35.638 "adrfam": "IPv4", 00:20:35.638 "traddr": "10.0.0.1", 00:20:35.638 "trsvcid": "40990" 00:20:35.638 }, 00:20:35.638 "auth": { 00:20:35.638 "state": "completed", 00:20:35.638 "digest": "sha384", 00:20:35.638 "dhgroup": "ffdhe2048" 00:20:35.638 } 00:20:35.638 } 00:20:35.638 ]' 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.638 20:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.638 20:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.638 20:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.638 20:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.638 20:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.638 20:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.895 20:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.830 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.088 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.352 00:20:37.352 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.352 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.352 20:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.659 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.659 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.659 20:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.659 20:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.659 20:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.659 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.659 { 00:20:37.659 "cntlid": 59, 00:20:37.659 "qid": 0, 00:20:37.659 "state": "enabled", 00:20:37.659 "thread": "nvmf_tgt_poll_group_000", 00:20:37.659 "listen_address": { 00:20:37.659 "trtype": "TCP", 00:20:37.659 "adrfam": "IPv4", 00:20:37.659 "traddr": "10.0.0.2", 00:20:37.659 "trsvcid": "4420" 00:20:37.659 }, 00:20:37.659 "peer_address": { 00:20:37.659 "trtype": "TCP", 00:20:37.659 "adrfam": "IPv4", 00:20:37.659 
"traddr": "10.0.0.1", 00:20:37.659 "trsvcid": "41024" 00:20:37.659 }, 00:20:37.659 "auth": { 00:20:37.659 "state": "completed", 00:20:37.659 "digest": "sha384", 00:20:37.659 "dhgroup": "ffdhe2048" 00:20:37.659 } 00:20:37.659 } 00:20:37.659 ]' 00:20:37.659 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.942 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.943 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.943 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.943 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.943 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.943 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.943 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.202 20:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.139 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.397 20:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.654 00:20:39.654 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.654 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.654 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.911 { 00:20:39.911 "cntlid": 61, 00:20:39.911 "qid": 0, 00:20:39.911 "state": "enabled", 00:20:39.911 "thread": "nvmf_tgt_poll_group_000", 00:20:39.911 "listen_address": { 00:20:39.911 "trtype": "TCP", 00:20:39.911 "adrfam": "IPv4", 00:20:39.911 "traddr": "10.0.0.2", 00:20:39.911 "trsvcid": "4420" 00:20:39.911 }, 00:20:39.911 "peer_address": { 00:20:39.911 "trtype": "TCP", 00:20:39.911 "adrfam": "IPv4", 00:20:39.911 "traddr": "10.0.0.1", 00:20:39.911 "trsvcid": "41048" 00:20:39.911 }, 00:20:39.911 "auth": { 00:20:39.911 "state": "completed", 00:20:39.911 "digest": "sha384", 00:20:39.911 "dhgroup": "ffdhe2048" 00:20:39.911 } 00:20:39.911 } 00:20:39.911 ]' 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.911 20:26:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.169 20:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.539 20:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.796 00:20:41.796 20:26:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.796 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.796 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.054 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.054 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.054 20:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.054 20:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.054 20:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.054 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.054 { 00:20:42.054 "cntlid": 63, 00:20:42.054 "qid": 0, 00:20:42.054 "state": "enabled", 00:20:42.054 "thread": "nvmf_tgt_poll_group_000", 00:20:42.054 "listen_address": { 00:20:42.054 "trtype": "TCP", 00:20:42.054 "adrfam": "IPv4", 00:20:42.054 "traddr": "10.0.0.2", 00:20:42.054 "trsvcid": "4420" 00:20:42.054 }, 00:20:42.054 "peer_address": { 00:20:42.054 "trtype": "TCP", 00:20:42.054 "adrfam": "IPv4", 00:20:42.054 "traddr": "10.0.0.1", 00:20:42.054 "trsvcid": "41084" 00:20:42.054 }, 00:20:42.054 "auth": { 00:20:42.054 "state": "completed", 00:20:42.054 "digest": "sha384", 00:20:42.054 "dhgroup": "ffdhe2048" 00:20:42.054 } 00:20:42.054 } 00:20:42.054 ]' 00:20:42.054 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.312 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.312 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.312 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.312 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.312 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.312 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.312 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.569 20:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
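The ffdhe2048 passes above all follow the same shape: the host RPC server is given the digest/dhgroup policy with bdev_nvme_set_options, the target authorizes the host NQN for one key pair with nvmf_subsystem_add_host, the host attaches the controller over TCP with the matching --dhchap-key/--dhchap-ctrlr-key, the script checks the controller name and the qpair auth state with jq, detaches, repeats the handshake from the kernel initiator with nvme connect and the DHHC-1 secrets, and finally removes the host entry before the next key. A minimal sketch of one such pass, using only the subsystem, address, and host NQN that appear in this run; the rpc.py path and the DHHC-1 secret values are placeholders, and key0/ckey0 name keys the surrounding script registered earlier in the run:

  HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBSYS=nqn.2024-03.io.spdk:cnode0
  RPC=scripts/rpc.py                                  # run from the SPDK source tree
  # Host-side policy: which digests/dhgroups the initiator may negotiate.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # Target side: authorize the host NQN with a bidirectional key pair.
  $RPC nvmf_subsystem_add_host $SUBSYS $HOST_NQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: authenticated attach over TCP.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOST_NQN -n $SUBSYS --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # The checks the log performs: controller name, then qpair auth state should read "completed".
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  $RPC nvmf_subsystem_get_qpairs $SUBSYS | jq -r '.[0].auth.state'
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Same handshake from the kernel initiator, then clean up the host entry for the next key.
  nvme connect -t tcp -a 10.0.0.2 -n $SUBSYS -i 1 -q $HOST_NQN --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
  nvme disconnect -n $SUBSYS
  $RPC nvmf_subsystem_remove_host $SUBSYS $HOST_NQN

The log below repeats this cycle for the ffdhe3072, ffdhe4096, and ffdhe6144 dhgroups with each configured key ID.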
00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.502 20:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.761 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.020 00:20:44.020 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.020 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.020 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.278 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.278 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.278 20:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.278 20:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.278 20:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.278 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.278 { 
00:20:44.278 "cntlid": 65, 00:20:44.278 "qid": 0, 00:20:44.278 "state": "enabled", 00:20:44.278 "thread": "nvmf_tgt_poll_group_000", 00:20:44.278 "listen_address": { 00:20:44.278 "trtype": "TCP", 00:20:44.278 "adrfam": "IPv4", 00:20:44.278 "traddr": "10.0.0.2", 00:20:44.278 "trsvcid": "4420" 00:20:44.278 }, 00:20:44.278 "peer_address": { 00:20:44.278 "trtype": "TCP", 00:20:44.278 "adrfam": "IPv4", 00:20:44.278 "traddr": "10.0.0.1", 00:20:44.278 "trsvcid": "39930" 00:20:44.278 }, 00:20:44.278 "auth": { 00:20:44.278 "state": "completed", 00:20:44.278 "digest": "sha384", 00:20:44.278 "dhgroup": "ffdhe3072" 00:20:44.278 } 00:20:44.278 } 00:20:44.278 ]' 00:20:44.278 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.536 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.536 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.536 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.536 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.536 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.536 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.536 20:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.794 20:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.727 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.985 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.551 00:20:46.551 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.551 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.551 20:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.551 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.551 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.551 20:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.551 20:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 20:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.551 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.551 { 00:20:46.551 "cntlid": 67, 00:20:46.551 "qid": 0, 00:20:46.551 "state": "enabled", 00:20:46.551 "thread": "nvmf_tgt_poll_group_000", 00:20:46.551 "listen_address": { 00:20:46.551 "trtype": "TCP", 00:20:46.551 "adrfam": "IPv4", 00:20:46.551 "traddr": "10.0.0.2", 00:20:46.551 "trsvcid": "4420" 00:20:46.551 }, 00:20:46.551 "peer_address": { 00:20:46.551 "trtype": "TCP", 00:20:46.551 "adrfam": "IPv4", 00:20:46.551 "traddr": "10.0.0.1", 00:20:46.551 "trsvcid": "39942" 00:20:46.551 }, 00:20:46.551 "auth": { 00:20:46.551 "state": "completed", 00:20:46.551 "digest": "sha384", 00:20:46.551 "dhgroup": "ffdhe3072" 00:20:46.551 } 00:20:46.551 } 00:20:46.551 ]' 00:20:46.551 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.809 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.809 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.809 20:26:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.809 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.809 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.809 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.809 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.067 20:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.000 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.258 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.516 00:20:48.516 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.516 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.516 20:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.776 { 00:20:48.776 "cntlid": 69, 00:20:48.776 "qid": 0, 00:20:48.776 "state": "enabled", 00:20:48.776 "thread": "nvmf_tgt_poll_group_000", 00:20:48.776 "listen_address": { 00:20:48.776 "trtype": "TCP", 00:20:48.776 "adrfam": "IPv4", 00:20:48.776 "traddr": "10.0.0.2", 00:20:48.776 "trsvcid": "4420" 00:20:48.776 }, 00:20:48.776 "peer_address": { 00:20:48.776 "trtype": "TCP", 00:20:48.776 "adrfam": "IPv4", 00:20:48.776 "traddr": "10.0.0.1", 00:20:48.776 "trsvcid": "39970" 00:20:48.776 }, 00:20:48.776 "auth": { 00:20:48.776 "state": "completed", 00:20:48.776 "digest": "sha384", 00:20:48.776 "dhgroup": "ffdhe3072" 00:20:48.776 } 00:20:48.776 } 00:20:48.776 ]' 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.776 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.034 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.034 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.034 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.034 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.034 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.292 20:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret 
DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:50.235 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.493 20:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.751 00:20:50.751 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.751 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.751 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.009 { 00:20:51.009 "cntlid": 71, 00:20:51.009 "qid": 0, 00:20:51.009 "state": "enabled", 00:20:51.009 "thread": "nvmf_tgt_poll_group_000", 00:20:51.009 "listen_address": { 00:20:51.009 "trtype": "TCP", 00:20:51.009 "adrfam": "IPv4", 00:20:51.009 "traddr": "10.0.0.2", 00:20:51.009 "trsvcid": "4420" 00:20:51.009 }, 00:20:51.009 "peer_address": { 00:20:51.009 "trtype": "TCP", 00:20:51.009 "adrfam": "IPv4", 00:20:51.009 "traddr": "10.0.0.1", 00:20:51.009 "trsvcid": "40002" 00:20:51.009 }, 00:20:51.009 "auth": { 00:20:51.009 "state": "completed", 00:20:51.009 "digest": "sha384", 00:20:51.009 "dhgroup": "ffdhe3072" 00:20:51.009 } 00:20:51.009 } 00:20:51.009 ]' 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.009 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.267 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.267 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.267 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.525 20:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.458 20:26:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.716 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.975 00:20:52.975 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.975 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.975 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.233 { 00:20:53.233 "cntlid": 73, 00:20:53.233 "qid": 0, 00:20:53.233 "state": "enabled", 00:20:53.233 "thread": "nvmf_tgt_poll_group_000", 00:20:53.233 "listen_address": { 00:20:53.233 "trtype": "TCP", 00:20:53.233 "adrfam": "IPv4", 00:20:53.233 "traddr": "10.0.0.2", 00:20:53.233 "trsvcid": "4420" 00:20:53.233 }, 00:20:53.233 "peer_address": { 00:20:53.233 "trtype": "TCP", 00:20:53.233 "adrfam": "IPv4", 00:20:53.233 "traddr": "10.0.0.1", 00:20:53.233 "trsvcid": "51316" 00:20:53.233 }, 00:20:53.233 "auth": { 00:20:53.233 
"state": "completed", 00:20:53.233 "digest": "sha384", 00:20:53.233 "dhgroup": "ffdhe4096" 00:20:53.233 } 00:20:53.233 } 00:20:53.233 ]' 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.233 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.491 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.491 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.491 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.491 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.491 20:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.749 20:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:20:54.683 20:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.683 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.683 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.683 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.683 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.683 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.683 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.683 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.940 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.197 00:20:55.455 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.455 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.455 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.713 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.713 20:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.713 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.713 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.713 20:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.713 { 00:20:55.713 "cntlid": 75, 00:20:55.713 "qid": 0, 00:20:55.713 "state": "enabled", 00:20:55.713 "thread": "nvmf_tgt_poll_group_000", 00:20:55.713 "listen_address": { 00:20:55.713 "trtype": "TCP", 00:20:55.713 "adrfam": "IPv4", 00:20:55.713 "traddr": "10.0.0.2", 00:20:55.713 "trsvcid": "4420" 00:20:55.713 }, 00:20:55.713 "peer_address": { 00:20:55.713 "trtype": "TCP", 00:20:55.713 "adrfam": "IPv4", 00:20:55.713 "traddr": "10.0.0.1", 00:20:55.713 "trsvcid": "51350" 00:20:55.713 }, 00:20:55.713 "auth": { 00:20:55.713 "state": "completed", 00:20:55.713 "digest": "sha384", 00:20:55.713 "dhgroup": "ffdhe4096" 00:20:55.713 } 00:20:55.713 } 00:20:55.713 ]' 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.713 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.971 20:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.906 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.165 20:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:57.732 00:20:57.732 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.732 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.732 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.990 { 00:20:57.990 "cntlid": 77, 00:20:57.990 "qid": 0, 00:20:57.990 "state": "enabled", 00:20:57.990 "thread": "nvmf_tgt_poll_group_000", 00:20:57.990 "listen_address": { 00:20:57.990 "trtype": "TCP", 00:20:57.990 "adrfam": "IPv4", 00:20:57.990 "traddr": "10.0.0.2", 00:20:57.990 "trsvcid": "4420" 00:20:57.990 }, 00:20:57.990 "peer_address": { 00:20:57.990 "trtype": "TCP", 00:20:57.990 "adrfam": "IPv4", 00:20:57.990 "traddr": "10.0.0.1", 00:20:57.990 "trsvcid": "51370" 00:20:57.990 }, 00:20:57.990 "auth": { 00:20:57.990 "state": "completed", 00:20:57.990 "digest": "sha384", 00:20:57.990 "dhgroup": "ffdhe4096" 00:20:57.990 } 00:20:57.990 } 00:20:57.990 ]' 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.990 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.248 20:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.618 20:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.618 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:59.618 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.618 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.618 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.618 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.618 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.618 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.619 20:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.619 20:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.619 20:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.619 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.619 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.182 00:21:00.182 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.182 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.182 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.439 { 00:21:00.439 "cntlid": 79, 00:21:00.439 "qid": 
0, 00:21:00.439 "state": "enabled", 00:21:00.439 "thread": "nvmf_tgt_poll_group_000", 00:21:00.439 "listen_address": { 00:21:00.439 "trtype": "TCP", 00:21:00.439 "adrfam": "IPv4", 00:21:00.439 "traddr": "10.0.0.2", 00:21:00.439 "trsvcid": "4420" 00:21:00.439 }, 00:21:00.439 "peer_address": { 00:21:00.439 "trtype": "TCP", 00:21:00.439 "adrfam": "IPv4", 00:21:00.439 "traddr": "10.0.0.1", 00:21:00.439 "trsvcid": "51398" 00:21:00.439 }, 00:21:00.439 "auth": { 00:21:00.439 "state": "completed", 00:21:00.439 "digest": "sha384", 00:21:00.439 "dhgroup": "ffdhe4096" 00:21:00.439 } 00:21:00.439 } 00:21:00.439 ]' 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.439 20:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.696 20:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.627 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.884 20:26:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.884 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.449 00:21:02.449 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.449 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.449 20:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.707 { 00:21:02.707 "cntlid": 81, 00:21:02.707 "qid": 0, 00:21:02.707 "state": "enabled", 00:21:02.707 "thread": "nvmf_tgt_poll_group_000", 00:21:02.707 "listen_address": { 00:21:02.707 "trtype": "TCP", 00:21:02.707 "adrfam": "IPv4", 00:21:02.707 "traddr": "10.0.0.2", 00:21:02.707 "trsvcid": "4420" 00:21:02.707 }, 00:21:02.707 "peer_address": { 00:21:02.707 "trtype": "TCP", 00:21:02.707 "adrfam": "IPv4", 00:21:02.707 "traddr": "10.0.0.1", 00:21:02.707 "trsvcid": "51440" 00:21:02.707 }, 00:21:02.707 "auth": { 00:21:02.707 "state": "completed", 00:21:02.707 "digest": "sha384", 00:21:02.707 "dhgroup": "ffdhe6144" 00:21:02.707 } 00:21:02.707 } 00:21:02.707 ]' 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.707 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.964 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.964 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.964 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.222 20:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.154 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.412 20:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.983 00:21:04.983 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.983 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.983 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.241 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.241 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.241 20:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.241 20:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.241 20:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.241 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.241 { 00:21:05.241 "cntlid": 83, 00:21:05.241 "qid": 0, 00:21:05.241 "state": "enabled", 00:21:05.241 "thread": "nvmf_tgt_poll_group_000", 00:21:05.241 "listen_address": { 00:21:05.241 "trtype": "TCP", 00:21:05.241 "adrfam": "IPv4", 00:21:05.241 "traddr": "10.0.0.2", 00:21:05.241 "trsvcid": "4420" 00:21:05.241 }, 00:21:05.241 "peer_address": { 00:21:05.241 "trtype": "TCP", 00:21:05.241 "adrfam": "IPv4", 00:21:05.241 "traddr": "10.0.0.1", 00:21:05.241 "trsvcid": "35788" 00:21:05.241 }, 00:21:05.241 "auth": { 00:21:05.241 "state": "completed", 00:21:05.241 "digest": "sha384", 00:21:05.241 "dhgroup": "ffdhe6144" 00:21:05.241 } 00:21:05.242 } 00:21:05.242 ]' 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.242 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.499 20:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret 
DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.459 20:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.717 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.283 00:21:07.283 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.283 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.283 20:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.541 20:26:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.541 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.541 20:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.541 20:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.541 20:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.541 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.541 { 00:21:07.541 "cntlid": 85, 00:21:07.541 "qid": 0, 00:21:07.541 "state": "enabled", 00:21:07.541 "thread": "nvmf_tgt_poll_group_000", 00:21:07.541 "listen_address": { 00:21:07.541 "trtype": "TCP", 00:21:07.541 "adrfam": "IPv4", 00:21:07.541 "traddr": "10.0.0.2", 00:21:07.541 "trsvcid": "4420" 00:21:07.541 }, 00:21:07.541 "peer_address": { 00:21:07.541 "trtype": "TCP", 00:21:07.541 "adrfam": "IPv4", 00:21:07.541 "traddr": "10.0.0.1", 00:21:07.541 "trsvcid": "35814" 00:21:07.541 }, 00:21:07.541 "auth": { 00:21:07.541 "state": "completed", 00:21:07.541 "digest": "sha384", 00:21:07.541 "dhgroup": "ffdhe6144" 00:21:07.541 } 00:21:07.541 } 00:21:07.541 ]' 00:21:07.541 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.800 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.800 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.800 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.800 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.800 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.800 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.800 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.058 20:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
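The ffdhe6144 passes above all follow the same per-key sequence that target/auth.sh repeats for every digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP options, authorize the host NQN on the subsystem with a key pair, attach a controller (which runs the handshake), confirm the negotiated digest, dhgroup and auth state on the resulting qpair, then detach and re-test the path through nvme-cli before removing the host again. Below is a condensed sketch of one such iteration, not the literal script: it assumes the key0/ckey0 names were registered with the target earlier in the test and that target-side RPCs go to the default SPDK socket, while the paths, addresses and NQNs are the ones printed in the surrounding log.

# Sketch of one iteration (digest=sha384, dhgroup=ffdhe6144); assumptions noted above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Limit the host-side initiator to a single digest/dhgroup combination.
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Authorize the host on the subsystem with a key and controller key (assumed pre-registered).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller; this is where the DH-HMAC-CHAP exchange actually happens.
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Check that the qpair negotiated the expected parameters and completed authentication.
qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear the controller down before repeating the check through nvme-cli.
$rpc -s $hostsock bdev_nvme_detach_controller nvme0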
00:21:08.992 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.251 20:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.815 00:21:09.816 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.816 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.816 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.074 { 00:21:10.074 "cntlid": 87, 00:21:10.074 "qid": 0, 00:21:10.074 "state": "enabled", 00:21:10.074 "thread": "nvmf_tgt_poll_group_000", 00:21:10.074 "listen_address": { 00:21:10.074 "trtype": "TCP", 00:21:10.074 "adrfam": "IPv4", 00:21:10.074 "traddr": "10.0.0.2", 00:21:10.074 "trsvcid": "4420" 00:21:10.074 }, 00:21:10.074 "peer_address": { 00:21:10.074 "trtype": "TCP", 00:21:10.074 "adrfam": "IPv4", 00:21:10.074 "traddr": "10.0.0.1", 00:21:10.074 "trsvcid": "35842" 00:21:10.074 }, 00:21:10.074 "auth": { 00:21:10.074 "state": "completed", 
00:21:10.074 "digest": "sha384", 00:21:10.074 "dhgroup": "ffdhe6144" 00:21:10.074 } 00:21:10.074 } 00:21:10.074 ]' 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.074 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.332 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.332 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.332 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.590 20:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.522 20:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.779 20:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.711 00:21:12.711 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.711 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.711 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.969 { 00:21:12.969 "cntlid": 89, 00:21:12.969 "qid": 0, 00:21:12.969 "state": "enabled", 00:21:12.969 "thread": "nvmf_tgt_poll_group_000", 00:21:12.969 "listen_address": { 00:21:12.969 "trtype": "TCP", 00:21:12.969 "adrfam": "IPv4", 00:21:12.969 "traddr": "10.0.0.2", 00:21:12.969 "trsvcid": "4420" 00:21:12.969 }, 00:21:12.969 "peer_address": { 00:21:12.969 "trtype": "TCP", 00:21:12.969 "adrfam": "IPv4", 00:21:12.969 "traddr": "10.0.0.1", 00:21:12.969 "trsvcid": "35878" 00:21:12.969 }, 00:21:12.969 "auth": { 00:21:12.969 "state": "completed", 00:21:12.969 "digest": "sha384", 00:21:12.969 "dhgroup": "ffdhe8192" 00:21:12.969 } 00:21:12.969 } 00:21:12.969 ]' 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.969 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.226 20:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.158 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.415 20:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
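After each RPC-driven pass, the script exercises the same credentials through the kernel initiator: an nvme connect that supplies the literal DHHC-1 secrets on the command line, a disconnect once the controller appears, and removal of the host from the subsystem so the next key can be tried. A minimal sketch of that step follows, using the key0 secrets exactly as they appear earlier in this log; it is illustrative only and assumes the target-side rpc.py call goes to the default SPDK socket.

# nvme-cli leg of the check (cf. target/auth.sh@52-56 above); secrets copied from the log.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Bidirectional DH-HMAC-CHAP: --dhchap-secret authenticates the host,
# --dhchap-ctrl-secret lets the host verify the controller.
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=:'

# Disconnect and de-authorize the host before the next dhgroup/key iteration.
nvme disconnect -n $subnqn
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host $subnqn $hostnqn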
00:21:15.348 00:21:15.348 20:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.348 20:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.348 20:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.605 { 00:21:15.605 "cntlid": 91, 00:21:15.605 "qid": 0, 00:21:15.605 "state": "enabled", 00:21:15.605 "thread": "nvmf_tgt_poll_group_000", 00:21:15.605 "listen_address": { 00:21:15.605 "trtype": "TCP", 00:21:15.605 "adrfam": "IPv4", 00:21:15.605 "traddr": "10.0.0.2", 00:21:15.605 "trsvcid": "4420" 00:21:15.605 }, 00:21:15.605 "peer_address": { 00:21:15.605 "trtype": "TCP", 00:21:15.605 "adrfam": "IPv4", 00:21:15.605 "traddr": "10.0.0.1", 00:21:15.605 "trsvcid": "46856" 00:21:15.605 }, 00:21:15.605 "auth": { 00:21:15.605 "state": "completed", 00:21:15.605 "digest": "sha384", 00:21:15.605 "dhgroup": "ffdhe8192" 00:21:15.605 } 00:21:15.605 } 00:21:15.605 ]' 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.605 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.862 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.862 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.862 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.119 20:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.051 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.307 20:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.308 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.308 20:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.238 00:21:18.238 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.238 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.238 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.494 { 
00:21:18.494 "cntlid": 93, 00:21:18.494 "qid": 0, 00:21:18.494 "state": "enabled", 00:21:18.494 "thread": "nvmf_tgt_poll_group_000", 00:21:18.494 "listen_address": { 00:21:18.494 "trtype": "TCP", 00:21:18.494 "adrfam": "IPv4", 00:21:18.494 "traddr": "10.0.0.2", 00:21:18.494 "trsvcid": "4420" 00:21:18.494 }, 00:21:18.494 "peer_address": { 00:21:18.494 "trtype": "TCP", 00:21:18.494 "adrfam": "IPv4", 00:21:18.494 "traddr": "10.0.0.1", 00:21:18.494 "trsvcid": "46886" 00:21:18.494 }, 00:21:18.494 "auth": { 00:21:18.494 "state": "completed", 00:21:18.494 "digest": "sha384", 00:21:18.494 "dhgroup": "ffdhe8192" 00:21:18.494 } 00:21:18.494 } 00:21:18.494 ]' 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.494 20:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.751 20:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.684 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.250 20:26:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.250 20:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.189 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.189 { 00:21:21.189 "cntlid": 95, 00:21:21.189 "qid": 0, 00:21:21.189 "state": "enabled", 00:21:21.189 "thread": "nvmf_tgt_poll_group_000", 00:21:21.189 "listen_address": { 00:21:21.189 "trtype": "TCP", 00:21:21.189 "adrfam": "IPv4", 00:21:21.189 "traddr": "10.0.0.2", 00:21:21.189 "trsvcid": "4420" 00:21:21.189 }, 00:21:21.189 "peer_address": { 00:21:21.189 "trtype": "TCP", 00:21:21.189 "adrfam": "IPv4", 00:21:21.189 "traddr": "10.0.0.1", 00:21:21.189 "trsvcid": "46928" 00:21:21.189 }, 00:21:21.189 "auth": { 00:21:21.189 "state": "completed", 00:21:21.189 "digest": "sha384", 00:21:21.189 "dhgroup": "ffdhe8192" 00:21:21.189 } 00:21:21.189 } 00:21:21.189 ]' 00:21:21.189 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.521 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.521 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.521 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.521 20:26:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.521 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.521 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.521 20:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.779 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.713 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.714 20:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.714 20:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.971 20:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.971 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.971 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.230 00:21:23.230 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.230 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.230 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.488 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.488 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.488 20:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.488 20:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.488 20:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.488 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.488 { 00:21:23.488 "cntlid": 97, 00:21:23.488 "qid": 0, 00:21:23.488 "state": "enabled", 00:21:23.488 "thread": "nvmf_tgt_poll_group_000", 00:21:23.488 "listen_address": { 00:21:23.488 "trtype": "TCP", 00:21:23.488 "adrfam": "IPv4", 00:21:23.488 "traddr": "10.0.0.2", 00:21:23.488 "trsvcid": "4420" 00:21:23.488 }, 00:21:23.488 "peer_address": { 00:21:23.489 "trtype": "TCP", 00:21:23.489 "adrfam": "IPv4", 00:21:23.489 "traddr": "10.0.0.1", 00:21:23.489 "trsvcid": "42628" 00:21:23.489 }, 00:21:23.489 "auth": { 00:21:23.489 "state": "completed", 00:21:23.489 "digest": "sha512", 00:21:23.489 "dhgroup": "null" 00:21:23.489 } 00:21:23.489 } 00:21:23.489 ]' 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.489 20:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.746 20:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret 
DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.679 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.936 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.193 00:21:25.193 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.193 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.193 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.451 { 00:21:25.451 "cntlid": 99, 00:21:25.451 "qid": 0, 00:21:25.451 "state": "enabled", 00:21:25.451 "thread": "nvmf_tgt_poll_group_000", 00:21:25.451 "listen_address": { 00:21:25.451 "trtype": "TCP", 00:21:25.451 "adrfam": "IPv4", 00:21:25.451 "traddr": "10.0.0.2", 00:21:25.451 "trsvcid": "4420" 00:21:25.451 }, 00:21:25.451 "peer_address": { 00:21:25.451 "trtype": "TCP", 00:21:25.451 "adrfam": "IPv4", 00:21:25.451 "traddr": "10.0.0.1", 00:21:25.451 "trsvcid": "42654" 00:21:25.451 }, 00:21:25.451 "auth": { 00:21:25.451 "state": "completed", 00:21:25.451 "digest": "sha512", 00:21:25.451 "dhgroup": "null" 00:21:25.451 } 00:21:25.451 } 00:21:25.451 ]' 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.451 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.709 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:25.709 20:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.709 20:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.709 20:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.709 20:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.966 20:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:21:26.899 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.899 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.899 20:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.899 20:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.899 20:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.899 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.899 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.899 20:27:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.158 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.416 00:21:27.416 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.416 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.416 20:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.673 { 00:21:27.673 "cntlid": 101, 00:21:27.673 "qid": 0, 00:21:27.673 "state": "enabled", 00:21:27.673 "thread": "nvmf_tgt_poll_group_000", 00:21:27.673 "listen_address": { 00:21:27.673 "trtype": "TCP", 00:21:27.673 "adrfam": "IPv4", 00:21:27.673 "traddr": "10.0.0.2", 00:21:27.673 "trsvcid": "4420" 00:21:27.673 }, 00:21:27.673 "peer_address": { 00:21:27.673 "trtype": "TCP", 00:21:27.673 "adrfam": "IPv4", 00:21:27.673 "traddr": "10.0.0.1", 00:21:27.673 "trsvcid": "42676" 00:21:27.673 }, 00:21:27.673 "auth": 
{ 00:21:27.673 "state": "completed", 00:21:27.673 "digest": "sha512", 00:21:27.673 "dhgroup": "null" 00:21:27.673 } 00:21:27.673 } 00:21:27.673 ]' 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.673 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.931 20:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:21:28.864 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.121 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.121 20:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.121 20:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.121 20:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.121 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.121 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.121 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.379 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:29.379 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.379 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.379 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.380 20:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.638 00:21:29.638 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.638 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.638 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.895 { 00:21:29.895 "cntlid": 103, 00:21:29.895 "qid": 0, 00:21:29.895 "state": "enabled", 00:21:29.895 "thread": "nvmf_tgt_poll_group_000", 00:21:29.895 "listen_address": { 00:21:29.895 "trtype": "TCP", 00:21:29.895 "adrfam": "IPv4", 00:21:29.895 "traddr": "10.0.0.2", 00:21:29.895 "trsvcid": "4420" 00:21:29.895 }, 00:21:29.895 "peer_address": { 00:21:29.895 "trtype": "TCP", 00:21:29.895 "adrfam": "IPv4", 00:21:29.895 "traddr": "10.0.0.1", 00:21:29.895 "trsvcid": "42698" 00:21:29.895 }, 00:21:29.895 "auth": { 00:21:29.895 "state": "completed", 00:21:29.895 "digest": "sha512", 00:21:29.895 "dhgroup": "null" 00:21:29.895 } 00:21:29.895 } 00:21:29.895 ]' 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.895 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.153 20:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:21:31.087 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.345 20:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.603 20:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.603 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.603 20:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.861 00:21:31.861 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.861 20:27:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.861 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.119 { 00:21:32.119 "cntlid": 105, 00:21:32.119 "qid": 0, 00:21:32.119 "state": "enabled", 00:21:32.119 "thread": "nvmf_tgt_poll_group_000", 00:21:32.119 "listen_address": { 00:21:32.119 "trtype": "TCP", 00:21:32.119 "adrfam": "IPv4", 00:21:32.119 "traddr": "10.0.0.2", 00:21:32.119 "trsvcid": "4420" 00:21:32.119 }, 00:21:32.119 "peer_address": { 00:21:32.119 "trtype": "TCP", 00:21:32.119 "adrfam": "IPv4", 00:21:32.119 "traddr": "10.0.0.1", 00:21:32.119 "trsvcid": "42714" 00:21:32.119 }, 00:21:32.119 "auth": { 00:21:32.119 "state": "completed", 00:21:32.119 "digest": "sha512", 00:21:32.119 "dhgroup": "ffdhe2048" 00:21:32.119 } 00:21:32.119 } 00:21:32.119 ]' 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.119 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.377 20:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
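The trace above and below repeats one connect_authenticate() iteration per (digest, dhgroup, key) combination; only those three parameters change between passes. As a reading aid, the following is a condensed, hedged sketch of a single iteration, reconstructed solely from the rpc.py and nvme-cli invocations visible in this trace. The RPC, HOST_SOCK, HOSTID, and secret variables are illustrative placeholders, not values lifted from auth.sh itself.

  # Sketch only -- variable names are placeholders; command names and flags are as seen in the trace.
  RPC=rpc.py                                  # SPDK RPC client (path/location is an assumption)
  HOST_SOCK=/var/tmp/host.sock                # RPC socket of the host-side bdev application
  NQN=nqn.2024-03.io.spdk:cnode0              # target subsystem NQN used throughout the trace
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  DIGEST=sha512; DHGROUP=ffdhe2048; KEYID=2   # one (digest, dhgroup, key) combination per pass

  # 1. Restrict the host-side initiator to the digest/DH group under test.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests $DIGEST --dhchap-dhgroups $DHGROUP

  # 2. Allow the host on the target subsystem with the matching DH-CHAP key pair.
  #    (For key3 the trace omits --dhchap-ctrlr-key, since no controller key is configured there.)
  $RPC nvmf_subsystem_add_host $NQN $HOSTNQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

  # 3. Attach a controller from the host application and authenticate over TCP.
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $NQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

  # 4. Verify the controller exists and the target qpair reports auth state "completed"
  #    with the expected digest and DH group (the [[ ... ]] checks in the trace).
  $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $RPC nvmf_subsystem_get_qpairs $NQN | jq -r '.[0].auth'           # state/digest/dhgroup

  # 5. Tear down the host-app controller, then repeat the handshake with the kernel initiator.
  #    The DHHC-1 secrets are placeholders here; the trace passes the literal generated values,
  #    and omits --dhchap-ctrl-secret on the key3 passes.
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $NQN -i 1 -q $HOSTNQN --hostid $HOSTID \
      --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
  nvme disconnect -n $NQN

  # 6. Remove the host from the subsystem before the next digest/dhgroup/key pass.
  $RPC nvmf_subsystem_remove_host $NQN $HOSTNQN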
00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.314 20:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.571 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.137 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.137 { 00:21:34.137 "cntlid": 107, 00:21:34.137 "qid": 0, 00:21:34.137 "state": "enabled", 00:21:34.137 "thread": 
"nvmf_tgt_poll_group_000", 00:21:34.137 "listen_address": { 00:21:34.137 "trtype": "TCP", 00:21:34.137 "adrfam": "IPv4", 00:21:34.137 "traddr": "10.0.0.2", 00:21:34.137 "trsvcid": "4420" 00:21:34.137 }, 00:21:34.137 "peer_address": { 00:21:34.137 "trtype": "TCP", 00:21:34.137 "adrfam": "IPv4", 00:21:34.137 "traddr": "10.0.0.1", 00:21:34.137 "trsvcid": "38024" 00:21:34.137 }, 00:21:34.137 "auth": { 00:21:34.137 "state": "completed", 00:21:34.137 "digest": "sha512", 00:21:34.137 "dhgroup": "ffdhe2048" 00:21:34.137 } 00:21:34.137 } 00:21:34.137 ]' 00:21:34.137 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.394 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.394 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.394 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:34.394 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.394 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.394 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.394 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.650 20:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.583 20:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:35.870 20:27:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.870 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.127 00:21:36.127 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.127 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.127 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.385 { 00:21:36.385 "cntlid": 109, 00:21:36.385 "qid": 0, 00:21:36.385 "state": "enabled", 00:21:36.385 "thread": "nvmf_tgt_poll_group_000", 00:21:36.385 "listen_address": { 00:21:36.385 "trtype": "TCP", 00:21:36.385 "adrfam": "IPv4", 00:21:36.385 "traddr": "10.0.0.2", 00:21:36.385 "trsvcid": "4420" 00:21:36.385 }, 00:21:36.385 "peer_address": { 00:21:36.385 "trtype": "TCP", 00:21:36.385 "adrfam": "IPv4", 00:21:36.385 "traddr": "10.0.0.1", 00:21:36.385 "trsvcid": "38044" 00:21:36.385 }, 00:21:36.385 "auth": { 00:21:36.385 "state": "completed", 00:21:36.385 "digest": "sha512", 00:21:36.385 "dhgroup": "ffdhe2048" 00:21:36.385 } 00:21:36.385 } 00:21:36.385 ]' 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:36.385 20:27:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.643 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.643 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.643 20:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.643 20:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:21:37.577 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.577 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.577 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.577 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.837 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.096 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.096 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.096 20:27:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.353 00:21:38.353 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.353 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.353 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.611 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.611 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.611 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.611 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.611 20:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.611 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.611 { 00:21:38.611 "cntlid": 111, 00:21:38.611 "qid": 0, 00:21:38.611 "state": "enabled", 00:21:38.611 "thread": "nvmf_tgt_poll_group_000", 00:21:38.611 "listen_address": { 00:21:38.611 "trtype": "TCP", 00:21:38.611 "adrfam": "IPv4", 00:21:38.611 "traddr": "10.0.0.2", 00:21:38.611 "trsvcid": "4420" 00:21:38.611 }, 00:21:38.611 "peer_address": { 00:21:38.611 "trtype": "TCP", 00:21:38.611 "adrfam": "IPv4", 00:21:38.611 "traddr": "10.0.0.1", 00:21:38.611 "trsvcid": "38070" 00:21:38.611 }, 00:21:38.611 "auth": { 00:21:38.611 "state": "completed", 00:21:38.611 "digest": "sha512", 00:21:38.611 "dhgroup": "ffdhe2048" 00:21:38.611 } 00:21:38.611 } 00:21:38.611 ]' 00:21:38.611 20:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.611 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.612 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.612 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.612 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.612 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.612 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.612 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.870 20:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:21:39.805 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.065 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.325 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.584 00:21:40.584 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.584 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.584 20:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.843 { 00:21:40.843 "cntlid": 113, 00:21:40.843 "qid": 0, 00:21:40.843 "state": "enabled", 00:21:40.843 "thread": "nvmf_tgt_poll_group_000", 00:21:40.843 "listen_address": { 00:21:40.843 "trtype": "TCP", 00:21:40.843 "adrfam": "IPv4", 00:21:40.843 "traddr": "10.0.0.2", 00:21:40.843 "trsvcid": "4420" 00:21:40.843 }, 00:21:40.843 "peer_address": { 00:21:40.843 "trtype": "TCP", 00:21:40.843 "adrfam": "IPv4", 00:21:40.843 "traddr": "10.0.0.1", 00:21:40.843 "trsvcid": "38092" 00:21:40.843 }, 00:21:40.843 "auth": { 00:21:40.843 "state": "completed", 00:21:40.843 "digest": "sha512", 00:21:40.843 "dhgroup": "ffdhe3072" 00:21:40.843 } 00:21:40.843 } 00:21:40.843 ]' 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.843 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.104 20:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.039 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.297 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:42.297 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.297 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.297 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:42.297 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.298 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.298 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.298 20:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.298 20:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.298 20:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.298 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.298 20:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.864 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.864 { 00:21:42.864 "cntlid": 115, 00:21:42.864 "qid": 0, 00:21:42.864 "state": "enabled", 00:21:42.864 "thread": "nvmf_tgt_poll_group_000", 00:21:42.864 "listen_address": { 00:21:42.864 "trtype": "TCP", 00:21:42.864 "adrfam": "IPv4", 00:21:42.864 "traddr": "10.0.0.2", 00:21:42.864 "trsvcid": "4420" 00:21:42.864 }, 00:21:42.864 "peer_address": { 00:21:42.864 "trtype": "TCP", 00:21:42.864 "adrfam": "IPv4", 00:21:42.864 "traddr": "10.0.0.1", 00:21:42.864 "trsvcid": "38128" 00:21:42.864 }, 00:21:42.864 "auth": { 00:21:42.864 "state": "completed", 00:21:42.864 "digest": "sha512", 00:21:42.864 "dhgroup": "ffdhe3072" 00:21:42.864 } 00:21:42.864 } 
00:21:42.864 ]' 00:21:42.864 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.122 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.122 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.122 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:43.122 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.122 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.122 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.122 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.380 20:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.319 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.577 20:27:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.577 20:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.834 00:21:44.834 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.834 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.834 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.092 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.092 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.092 20:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.092 20:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.092 20:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.092 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.092 { 00:21:45.092 "cntlid": 117, 00:21:45.092 "qid": 0, 00:21:45.092 "state": "enabled", 00:21:45.092 "thread": "nvmf_tgt_poll_group_000", 00:21:45.092 "listen_address": { 00:21:45.092 "trtype": "TCP", 00:21:45.092 "adrfam": "IPv4", 00:21:45.092 "traddr": "10.0.0.2", 00:21:45.092 "trsvcid": "4420" 00:21:45.092 }, 00:21:45.092 "peer_address": { 00:21:45.092 "trtype": "TCP", 00:21:45.092 "adrfam": "IPv4", 00:21:45.092 "traddr": "10.0.0.1", 00:21:45.092 "trsvcid": "60062" 00:21:45.092 }, 00:21:45.092 "auth": { 00:21:45.092 "state": "completed", 00:21:45.092 "digest": "sha512", 00:21:45.092 "dhgroup": "ffdhe3072" 00:21:45.092 } 00:21:45.092 } 00:21:45.092 ]' 00:21:45.092 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.349 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.349 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.349 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.349 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.349 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.349 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.349 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.606 20:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.543 20:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.807 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.808 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.377 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.377 { 00:21:47.377 "cntlid": 119, 00:21:47.377 "qid": 0, 00:21:47.377 "state": "enabled", 00:21:47.377 "thread": "nvmf_tgt_poll_group_000", 00:21:47.377 "listen_address": { 00:21:47.377 "trtype": "TCP", 00:21:47.377 "adrfam": "IPv4", 00:21:47.377 "traddr": "10.0.0.2", 00:21:47.377 "trsvcid": "4420" 00:21:47.377 }, 00:21:47.377 "peer_address": { 00:21:47.377 "trtype": "TCP", 00:21:47.377 "adrfam": "IPv4", 00:21:47.377 "traddr": "10.0.0.1", 00:21:47.377 "trsvcid": "60084" 00:21:47.377 }, 00:21:47.377 "auth": { 00:21:47.377 "state": "completed", 00:21:47.377 "digest": "sha512", 00:21:47.377 "dhgroup": "ffdhe3072" 00:21:47.377 } 00:21:47.377 } 00:21:47.377 ]' 00:21:47.377 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.635 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.635 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.635 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.635 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.635 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.635 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.635 20:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.892 20:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.827 20:27:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.827 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.085 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.651 00:21:49.651 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.651 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.651 20:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.651 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.651 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.652 20:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.652 20:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.652 20:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.652 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.652 { 00:21:49.652 "cntlid": 121, 00:21:49.652 "qid": 0, 00:21:49.652 "state": "enabled", 00:21:49.652 "thread": "nvmf_tgt_poll_group_000", 00:21:49.652 "listen_address": { 00:21:49.652 "trtype": "TCP", 00:21:49.652 "adrfam": "IPv4", 
00:21:49.652 "traddr": "10.0.0.2", 00:21:49.652 "trsvcid": "4420" 00:21:49.652 }, 00:21:49.652 "peer_address": { 00:21:49.652 "trtype": "TCP", 00:21:49.652 "adrfam": "IPv4", 00:21:49.652 "traddr": "10.0.0.1", 00:21:49.652 "trsvcid": "60100" 00:21:49.652 }, 00:21:49.652 "auth": { 00:21:49.652 "state": "completed", 00:21:49.652 "digest": "sha512", 00:21:49.652 "dhgroup": "ffdhe4096" 00:21:49.652 } 00:21:49.652 } 00:21:49.652 ]' 00:21:49.652 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.909 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.909 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.909 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.909 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.909 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.909 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.909 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.166 20:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.159 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.416 20:27:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.416 20:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.673 00:21:51.932 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.932 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.932 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.189 { 00:21:52.189 "cntlid": 123, 00:21:52.189 "qid": 0, 00:21:52.189 "state": "enabled", 00:21:52.189 "thread": "nvmf_tgt_poll_group_000", 00:21:52.189 "listen_address": { 00:21:52.189 "trtype": "TCP", 00:21:52.189 "adrfam": "IPv4", 00:21:52.189 "traddr": "10.0.0.2", 00:21:52.189 "trsvcid": "4420" 00:21:52.189 }, 00:21:52.189 "peer_address": { 00:21:52.189 "trtype": "TCP", 00:21:52.189 "adrfam": "IPv4", 00:21:52.189 "traddr": "10.0.0.1", 00:21:52.189 "trsvcid": "60132" 00:21:52.189 }, 00:21:52.189 "auth": { 00:21:52.189 "state": "completed", 00:21:52.189 "digest": "sha512", 00:21:52.189 "dhgroup": "ffdhe4096" 00:21:52.189 } 00:21:52.189 } 00:21:52.189 ]' 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.189 20:27:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.189 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.446 20:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:21:53.383 20:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.383 20:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.383 20:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.383 20:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.383 20:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.383 20:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.383 20:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.384 20:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.642 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.210 00:21:54.210 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.210 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.210 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.468 { 00:21:54.468 "cntlid": 125, 00:21:54.468 "qid": 0, 00:21:54.468 "state": "enabled", 00:21:54.468 "thread": "nvmf_tgt_poll_group_000", 00:21:54.468 "listen_address": { 00:21:54.468 "trtype": "TCP", 00:21:54.468 "adrfam": "IPv4", 00:21:54.468 "traddr": "10.0.0.2", 00:21:54.468 "trsvcid": "4420" 00:21:54.468 }, 00:21:54.468 "peer_address": { 00:21:54.468 "trtype": "TCP", 00:21:54.468 "adrfam": "IPv4", 00:21:54.468 "traddr": "10.0.0.1", 00:21:54.468 "trsvcid": "59686" 00:21:54.468 }, 00:21:54.468 "auth": { 00:21:54.468 "state": "completed", 00:21:54.468 "digest": "sha512", 00:21:54.468 "dhgroup": "ffdhe4096" 00:21:54.468 } 00:21:54.468 } 00:21:54.468 ]' 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.468 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.469 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.469 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.469 20:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.726 20:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:21:55.658 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:55.658 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.658 20:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.658 20:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.917 20:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.177 20:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.177 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.177 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.435 00:21:56.435 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.435 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.435 20:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.693 { 00:21:56.693 "cntlid": 127, 00:21:56.693 "qid": 0, 00:21:56.693 "state": "enabled", 00:21:56.693 "thread": "nvmf_tgt_poll_group_000", 00:21:56.693 "listen_address": { 00:21:56.693 "trtype": "TCP", 00:21:56.693 "adrfam": "IPv4", 00:21:56.693 "traddr": "10.0.0.2", 00:21:56.693 "trsvcid": "4420" 00:21:56.693 }, 00:21:56.693 "peer_address": { 00:21:56.693 "trtype": "TCP", 00:21:56.693 "adrfam": "IPv4", 00:21:56.693 "traddr": "10.0.0.1", 00:21:56.693 "trsvcid": "59716" 00:21:56.693 }, 00:21:56.693 "auth": { 00:21:56.693 "state": "completed", 00:21:56.693 "digest": "sha512", 00:21:56.693 "dhgroup": "ffdhe4096" 00:21:56.693 } 00:21:56.693 } 00:21:56.693 ]' 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.693 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.951 20:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.328 20:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.894 00:21:58.894 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.894 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.894 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.152 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.152 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.152 20:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.152 20:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.152 20:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.152 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.152 { 00:21:59.152 "cntlid": 129, 00:21:59.152 "qid": 0, 00:21:59.152 "state": "enabled", 00:21:59.152 "thread": "nvmf_tgt_poll_group_000", 00:21:59.152 "listen_address": { 00:21:59.152 "trtype": "TCP", 00:21:59.153 "adrfam": "IPv4", 00:21:59.153 "traddr": "10.0.0.2", 00:21:59.153 "trsvcid": "4420" 00:21:59.153 }, 00:21:59.153 "peer_address": { 00:21:59.153 "trtype": "TCP", 00:21:59.153 "adrfam": "IPv4", 00:21:59.153 "traddr": "10.0.0.1", 00:21:59.153 "trsvcid": "59748" 00:21:59.153 }, 00:21:59.153 "auth": { 00:21:59.153 "state": "completed", 00:21:59.153 "digest": "sha512", 00:21:59.153 "dhgroup": "ffdhe6144" 00:21:59.153 } 00:21:59.153 } 00:21:59.153 ]' 00:21:59.153 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.153 20:27:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.153 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.153 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.153 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.153 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.153 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.153 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.412 20:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.348 20:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.608 20:27:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.608 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.175 00:22:01.434 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.434 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.434 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.692 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.692 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.692 20:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.692 20:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.692 20:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.692 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.692 { 00:22:01.692 "cntlid": 131, 00:22:01.692 "qid": 0, 00:22:01.692 "state": "enabled", 00:22:01.692 "thread": "nvmf_tgt_poll_group_000", 00:22:01.692 "listen_address": { 00:22:01.692 "trtype": "TCP", 00:22:01.692 "adrfam": "IPv4", 00:22:01.692 "traddr": "10.0.0.2", 00:22:01.692 "trsvcid": "4420" 00:22:01.692 }, 00:22:01.692 "peer_address": { 00:22:01.692 "trtype": "TCP", 00:22:01.692 "adrfam": "IPv4", 00:22:01.692 "traddr": "10.0.0.1", 00:22:01.692 "trsvcid": "59774" 00:22:01.692 }, 00:22:01.692 "auth": { 00:22:01.692 "state": "completed", 00:22:01.692 "digest": "sha512", 00:22:01.692 "dhgroup": "ffdhe6144" 00:22:01.692 } 00:22:01.692 } 00:22:01.692 ]' 00:22:01.692 20:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.692 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.692 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.692 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:01.692 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.692 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.692 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.692 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.950 20:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.886 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.144 20:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.712 00:22:03.712 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.712 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.712 20:27:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.970 { 00:22:03.970 "cntlid": 133, 00:22:03.970 "qid": 0, 00:22:03.970 "state": "enabled", 00:22:03.970 "thread": "nvmf_tgt_poll_group_000", 00:22:03.970 "listen_address": { 00:22:03.970 "trtype": "TCP", 00:22:03.970 "adrfam": "IPv4", 00:22:03.970 "traddr": "10.0.0.2", 00:22:03.970 "trsvcid": "4420" 00:22:03.970 }, 00:22:03.970 "peer_address": { 00:22:03.970 "trtype": "TCP", 00:22:03.970 "adrfam": "IPv4", 00:22:03.970 "traddr": "10.0.0.1", 00:22:03.970 "trsvcid": "40694" 00:22:03.970 }, 00:22:03.970 "auth": { 00:22:03.970 "state": "completed", 00:22:03.970 "digest": "sha512", 00:22:03.970 "dhgroup": "ffdhe6144" 00:22:03.970 } 00:22:03.970 } 00:22:03.970 ]' 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.970 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.228 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.488 20:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:22:05.420 20:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.420 20:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.420 20:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.420 20:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.420 20:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.420 20:27:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.420 20:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.420 20:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.678 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.260 00:22:06.260 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.260 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.260 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.517 { 00:22:06.517 "cntlid": 135, 00:22:06.517 "qid": 0, 00:22:06.517 "state": "enabled", 00:22:06.517 "thread": "nvmf_tgt_poll_group_000", 00:22:06.517 "listen_address": { 00:22:06.517 "trtype": "TCP", 00:22:06.517 "adrfam": "IPv4", 00:22:06.517 "traddr": "10.0.0.2", 00:22:06.517 "trsvcid": "4420" 00:22:06.517 }, 
00:22:06.517 "peer_address": { 00:22:06.517 "trtype": "TCP", 00:22:06.517 "adrfam": "IPv4", 00:22:06.517 "traddr": "10.0.0.1", 00:22:06.517 "trsvcid": "40728" 00:22:06.517 }, 00:22:06.517 "auth": { 00:22:06.517 "state": "completed", 00:22:06.517 "digest": "sha512", 00:22:06.517 "dhgroup": "ffdhe6144" 00:22:06.517 } 00:22:06.517 } 00:22:06.517 ]' 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.517 20:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.776 20:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:22:07.707 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.707 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.707 20:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.707 20:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.708 20:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.708 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.708 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.708 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.708 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.965 20:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.899 00:22:08.899 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.899 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.899 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.156 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.156 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.156 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.156 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.156 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.156 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.156 { 00:22:09.156 "cntlid": 137, 00:22:09.156 "qid": 0, 00:22:09.156 "state": "enabled", 00:22:09.156 "thread": "nvmf_tgt_poll_group_000", 00:22:09.156 "listen_address": { 00:22:09.156 "trtype": "TCP", 00:22:09.156 "adrfam": "IPv4", 00:22:09.156 "traddr": "10.0.0.2", 00:22:09.156 "trsvcid": "4420" 00:22:09.156 }, 00:22:09.156 "peer_address": { 00:22:09.156 "trtype": "TCP", 00:22:09.156 "adrfam": "IPv4", 00:22:09.156 "traddr": "10.0.0.1", 00:22:09.156 "trsvcid": "40762" 00:22:09.156 }, 00:22:09.156 "auth": { 00:22:09.156 "state": "completed", 00:22:09.156 "digest": "sha512", 00:22:09.156 "dhgroup": "ffdhe8192" 00:22:09.156 } 00:22:09.156 } 00:22:09.156 ]' 00:22:09.156 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.413 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.413 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.413 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.413 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.413 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.413 20:27:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.413 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.670 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:22:10.602 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.602 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.602 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.602 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.602 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.602 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.602 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.602 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.862 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:10.862 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.862 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.862 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.863 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.798 00:22:11.798 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.798 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.798 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.056 { 00:22:12.056 "cntlid": 139, 00:22:12.056 "qid": 0, 00:22:12.056 "state": "enabled", 00:22:12.056 "thread": "nvmf_tgt_poll_group_000", 00:22:12.056 "listen_address": { 00:22:12.056 "trtype": "TCP", 00:22:12.056 "adrfam": "IPv4", 00:22:12.056 "traddr": "10.0.0.2", 00:22:12.056 "trsvcid": "4420" 00:22:12.056 }, 00:22:12.056 "peer_address": { 00:22:12.056 "trtype": "TCP", 00:22:12.056 "adrfam": "IPv4", 00:22:12.056 "traddr": "10.0.0.1", 00:22:12.056 "trsvcid": "40792" 00:22:12.056 }, 00:22:12.056 "auth": { 00:22:12.056 "state": "completed", 00:22:12.056 "digest": "sha512", 00:22:12.056 "dhgroup": "ffdhe8192" 00:22:12.056 } 00:22:12.056 } 00:22:12.056 ]' 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.056 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.314 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTdlYTJhMDJhNzAyNmMzMGJiMTc1YjA1YWYwMzcxZWaVPqur: --dhchap-ctrl-secret DHHC-1:02:MzAyNjRlZDRlNjViN2EyOTk0Njc5NTY1ZWNkNjFmNzlhOGMyMWE1NDBmZDg2MzcxhRdzRQ==: 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.692 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.692 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.625 00:22:14.625 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.625 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.625 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.883 { 00:22:14.883 "cntlid": 141, 00:22:14.883 "qid": 0, 00:22:14.883 "state": "enabled", 00:22:14.883 "thread": "nvmf_tgt_poll_group_000", 00:22:14.883 "listen_address": { 00:22:14.883 "trtype": "TCP", 00:22:14.883 "adrfam": "IPv4", 00:22:14.883 "traddr": "10.0.0.2", 00:22:14.883 "trsvcid": "4420" 00:22:14.883 }, 00:22:14.883 "peer_address": { 00:22:14.883 "trtype": "TCP", 00:22:14.883 "adrfam": "IPv4", 00:22:14.883 "traddr": "10.0.0.1", 00:22:14.883 "trsvcid": "45246" 00:22:14.883 }, 00:22:14.883 "auth": { 00:22:14.883 "state": "completed", 00:22:14.883 "digest": "sha512", 00:22:14.883 "dhgroup": "ffdhe8192" 00:22:14.883 } 00:22:14.883 } 00:22:14.883 ]' 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.883 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.141 20:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTE1YzM0MWU1MjFhNzJlZmU3MmMyY2RiZDNiZmNkZjE0MjQxNDA3ZDY2NTczMWJi3MJujQ==: --dhchap-ctrl-secret DHHC-1:01:OGYwMzUxNzM0ZjhhNzY1NGJhZDA4ZjFmNjg4OGQxNzSjPQlN: 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.076 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.646 20:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.581 00:22:17.582 20:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.582 20:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.582 20:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.582 { 00:22:17.582 "cntlid": 143, 00:22:17.582 "qid": 0, 00:22:17.582 "state": "enabled", 00:22:17.582 "thread": "nvmf_tgt_poll_group_000", 00:22:17.582 "listen_address": { 00:22:17.582 "trtype": "TCP", 00:22:17.582 "adrfam": "IPv4", 00:22:17.582 "traddr": "10.0.0.2", 00:22:17.582 "trsvcid": "4420" 00:22:17.582 }, 00:22:17.582 "peer_address": { 00:22:17.582 "trtype": "TCP", 00:22:17.582 "adrfam": "IPv4", 00:22:17.582 "traddr": "10.0.0.1", 00:22:17.582 "trsvcid": "45282" 00:22:17.582 }, 00:22:17.582 "auth": { 00:22:17.582 "state": "completed", 00:22:17.582 "digest": "sha512", 00:22:17.582 "dhgroup": "ffdhe8192" 00:22:17.582 } 00:22:17.582 } 00:22:17.582 ]' 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.582 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.582 
20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.839 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.839 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.839 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.839 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.839 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.097 20:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.034 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.292 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.229 00:22:20.229 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.229 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.229 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.485 { 00:22:20.485 "cntlid": 145, 00:22:20.485 "qid": 0, 00:22:20.485 "state": "enabled", 00:22:20.485 "thread": "nvmf_tgt_poll_group_000", 00:22:20.485 "listen_address": { 00:22:20.485 "trtype": "TCP", 00:22:20.485 "adrfam": "IPv4", 00:22:20.485 "traddr": "10.0.0.2", 00:22:20.485 "trsvcid": "4420" 00:22:20.485 }, 00:22:20.485 "peer_address": { 00:22:20.485 "trtype": "TCP", 00:22:20.485 "adrfam": "IPv4", 00:22:20.485 "traddr": "10.0.0.1", 00:22:20.485 "trsvcid": "45318" 00:22:20.485 }, 00:22:20.485 "auth": { 00:22:20.485 "state": "completed", 00:22:20.485 "digest": "sha512", 00:22:20.485 "dhgroup": "ffdhe8192" 00:22:20.485 } 00:22:20.485 } 00:22:20.485 ]' 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.485 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.485 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.485 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.485 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.742 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTA5MDhmNDc1NGUyZmRiY2VhODQwNmE1NzA3OWM5M2I1MjZjYTgzMDhmNWYxOWUyvcFDBg==: --dhchap-ctrl-secret DHHC-1:03:NTZmYjYyOGFiMGYyMmM3MDhhNmI5MjUwYWNjMTNlNGE1OWFlN2JiMmY4NGEwODI4ZWFmM2NjMWQ0MGQ1N2Q1OI82XZ8=: 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.688 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:21.953 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:22.552 request: 00:22:22.552 { 00:22:22.552 "name": "nvme0", 00:22:22.552 "trtype": "tcp", 00:22:22.552 "traddr": "10.0.0.2", 00:22:22.552 "adrfam": "ipv4", 00:22:22.552 "trsvcid": "4420", 00:22:22.552 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:22.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.552 "prchk_reftag": false, 00:22:22.552 "prchk_guard": false, 00:22:22.552 "hdgst": false, 00:22:22.552 "ddgst": false, 00:22:22.552 "dhchap_key": "key2", 00:22:22.552 "method": "bdev_nvme_attach_controller", 00:22:22.552 "req_id": 1 00:22:22.552 } 00:22:22.552 Got JSON-RPC error response 00:22:22.552 response: 00:22:22.552 { 00:22:22.552 "code": -5, 00:22:22.552 "message": "Input/output error" 00:22:22.552 } 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:22.552 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.553 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:22.810 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.810 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:22.810 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.810 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.810 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.745 request: 00:22:23.745 { 00:22:23.745 "name": "nvme0", 00:22:23.745 "trtype": "tcp", 00:22:23.745 "traddr": "10.0.0.2", 00:22:23.745 "adrfam": "ipv4", 00:22:23.745 "trsvcid": "4420", 00:22:23.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:23.745 "prchk_reftag": false, 00:22:23.745 "prchk_guard": false, 00:22:23.745 "hdgst": false, 00:22:23.745 "ddgst": false, 00:22:23.745 "dhchap_key": "key1", 00:22:23.745 "dhchap_ctrlr_key": "ckey2", 00:22:23.745 "method": "bdev_nvme_attach_controller", 00:22:23.745 "req_id": 1 00:22:23.745 } 00:22:23.745 Got JSON-RPC error response 00:22:23.745 response: 00:22:23.745 { 00:22:23.745 "code": -5, 00:22:23.745 "message": "Input/output error" 00:22:23.745 } 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.745 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:23.746 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.746 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:23.746 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.746 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:23.746 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.746 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.746 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.317 request: 00:22:24.317 { 00:22:24.317 "name": "nvme0", 00:22:24.317 "trtype": "tcp", 00:22:24.317 "traddr": "10.0.0.2", 00:22:24.317 "adrfam": "ipv4", 00:22:24.317 "trsvcid": "4420", 00:22:24.317 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.317 "prchk_reftag": false, 00:22:24.317 "prchk_guard": false, 00:22:24.317 "hdgst": false, 00:22:24.317 "ddgst": false, 00:22:24.317 "dhchap_key": "key1", 00:22:24.317 "dhchap_ctrlr_key": "ckey1", 00:22:24.317 "method": "bdev_nvme_attach_controller", 00:22:24.317 "req_id": 1 00:22:24.317 } 00:22:24.317 Got JSON-RPC error response 00:22:24.317 response: 00:22:24.317 { 00:22:24.317 "code": -5, 00:22:24.317 "message": "Input/output error" 00:22:24.317 } 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4060538 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4060538 ']' 00:22:24.317 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4060538 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4060538 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4060538' 00:22:24.576 killing process with pid 4060538 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4060538 00:22:24.576 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4060538 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4082991 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4082991 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4082991 ']' 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.834 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4082991 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4082991 ']' 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
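The target was just restarted with --wait-for-rpc -L nvmf_auth, so the harness has to wait for the RPC socket before issuing further commands. A minimal sketch of that polling idea (illustrative only; the real waitforlisten helper in autotest_common.sh does more bookkeeping):

    # Poll the SPDK RPC socket until the freshly started nvmf_tgt answers.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break               # RPC server is up and accepting requests
        fi
        sleep 0.1
    done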
00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.092 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.348 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.348 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:25.348 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:25.348 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.348 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.348 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.349 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.329 00:22:26.329 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.329 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.329 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.586 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.586 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.586 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.586 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.586 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.586 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.586 { 00:22:26.586 
"cntlid": 1, 00:22:26.586 "qid": 0, 00:22:26.586 "state": "enabled", 00:22:26.586 "thread": "nvmf_tgt_poll_group_000", 00:22:26.586 "listen_address": { 00:22:26.586 "trtype": "TCP", 00:22:26.586 "adrfam": "IPv4", 00:22:26.586 "traddr": "10.0.0.2", 00:22:26.586 "trsvcid": "4420" 00:22:26.586 }, 00:22:26.586 "peer_address": { 00:22:26.586 "trtype": "TCP", 00:22:26.586 "adrfam": "IPv4", 00:22:26.586 "traddr": "10.0.0.1", 00:22:26.586 "trsvcid": "46056" 00:22:26.586 }, 00:22:26.586 "auth": { 00:22:26.586 "state": "completed", 00:22:26.586 "digest": "sha512", 00:22:26.586 "dhgroup": "ffdhe8192" 00:22:26.586 } 00:22:26.586 } 00:22:26.586 ]' 00:22:26.586 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.586 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.586 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.586 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.586 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.586 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.586 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.586 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.845 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTU5NDczOWNkMzcxZTA3YmVmODVkYTNhMDE2MzAwMmM0ZWQzZGIxYjBhMTQwMzI0OGJiMTdiZDQ0YzdhYTQyNUUEHU0=: 00:22:27.777 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:28.035 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.293 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.553 request: 00:22:28.553 { 00:22:28.553 "name": "nvme0", 00:22:28.553 "trtype": "tcp", 00:22:28.553 "traddr": "10.0.0.2", 00:22:28.553 "adrfam": "ipv4", 00:22:28.553 "trsvcid": "4420", 00:22:28.553 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.553 "prchk_reftag": false, 00:22:28.553 "prchk_guard": false, 00:22:28.553 "hdgst": false, 00:22:28.553 "ddgst": false, 00:22:28.553 "dhchap_key": "key3", 00:22:28.553 "method": "bdev_nvme_attach_controller", 00:22:28.553 "req_id": 1 00:22:28.553 } 00:22:28.553 Got JSON-RPC error response 00:22:28.553 response: 00:22:28.553 { 00:22:28.553 "code": -5, 00:22:28.553 "message": "Input/output error" 00:22:28.553 } 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:28.553 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.813 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.813 request: 00:22:28.813 { 00:22:28.813 "name": "nvme0", 00:22:28.813 "trtype": "tcp", 00:22:28.813 "traddr": "10.0.0.2", 00:22:28.813 "adrfam": "ipv4", 00:22:28.813 "trsvcid": "4420", 00:22:28.813 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.813 "prchk_reftag": false, 00:22:28.813 "prchk_guard": false, 00:22:28.813 "hdgst": false, 00:22:28.813 "ddgst": false, 00:22:28.813 "dhchap_key": "key3", 00:22:28.813 "method": "bdev_nvme_attach_controller", 00:22:28.813 "req_id": 1 00:22:28.813 } 00:22:28.813 Got JSON-RPC error response 00:22:28.813 response: 00:22:28.813 { 00:22:28.813 "code": -5, 00:22:28.813 "message": "Input/output error" 00:22:28.813 } 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.073 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.332 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.590 request: 00:22:29.590 { 00:22:29.590 "name": "nvme0", 00:22:29.591 "trtype": "tcp", 00:22:29.591 "traddr": "10.0.0.2", 00:22:29.591 "adrfam": "ipv4", 00:22:29.591 "trsvcid": "4420", 00:22:29.591 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.591 "prchk_reftag": false, 00:22:29.591 "prchk_guard": false, 00:22:29.591 "hdgst": false, 00:22:29.591 "ddgst": false, 00:22:29.591 
"dhchap_key": "key0", 00:22:29.591 "dhchap_ctrlr_key": "key1", 00:22:29.591 "method": "bdev_nvme_attach_controller", 00:22:29.591 "req_id": 1 00:22:29.591 } 00:22:29.591 Got JSON-RPC error response 00:22:29.591 response: 00:22:29.591 { 00:22:29.591 "code": -5, 00:22:29.591 "message": "Input/output error" 00:22:29.591 } 00:22:29.591 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:29.591 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.591 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.591 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.591 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:29.591 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:29.849 00:22:29.849 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:29.849 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:29.849 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.106 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.106 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.106 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4060619 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4060619 ']' 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4060619 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4060619 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4060619' 00:22:30.365 killing process with pid 4060619 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4060619 00:22:30.365 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4060619 
00:22:30.623 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:30.623 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:30.623 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:30.623 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.623 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:30.623 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.623 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.623 rmmod nvme_tcp 00:22:30.623 rmmod nvme_fabrics 00:22:30.883 rmmod nvme_keyring 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4082991 ']' 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4082991 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4082991 ']' 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4082991 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4082991 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4082991' 00:22:30.883 killing process with pid 4082991 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4082991 00:22:30.883 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4082991 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.144 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.050 20:28:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:33.050 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8Bb /tmp/spdk.key-sha256.CJa /tmp/spdk.key-sha384.nlz /tmp/spdk.key-sha512.RMq /tmp/spdk.key-sha512.Alc /tmp/spdk.key-sha384.ApJ /tmp/spdk.key-sha256.oOs '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:33.050 00:22:33.050 real 3m9.042s 00:22:33.050 user 7m19.690s 00:22:33.050 sys 0m24.888s 00:22:33.050 20:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.050 20:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.050 ************************************ 00:22:33.050 END TEST nvmf_auth_target 00:22:33.050 ************************************ 00:22:33.050 20:28:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:33.050 20:28:11 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:33.050 20:28:11 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:33.050 20:28:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:33.050 20:28:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.050 20:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.050 ************************************ 00:22:33.050 START TEST nvmf_bdevio_no_huge 00:22:33.050 ************************************ 00:22:33.050 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:33.309 * Looking for test storage... 00:22:33.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
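The bdevio run sources nvmf/common.sh, which derives the host identity from nvme gen-hostnqn; the uuid-style NQN and matching host ID seen throughout this log come from that step. Illustrative usage (the exact string manipulation in common.sh may differ):

    # Generate a host NQN and reuse its UUID portion as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keeps only the text after the last ':'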
00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.309 20:28:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.309 20:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
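gather_supported_nvmf_pci_devs (traced here and continuing below) buckets the host's NICs by PCI vendor/device ID into the e810, x722 and mlx arrays before choosing which ports to use. A condensed illustration of that bucketing, with the IDs taken from the trace (the helper function itself is illustrative only and not part of nvmf/common.sh, and the Mellanox match is simplified to a wildcard):

    intel=0x8086 mellanox=0x15b3
    nic_family() {            # illustrative helper only
      local vendor=$1 device=$2
      case "$vendor:$device" in
        "$intel":0x1592 | "$intel":0x159b) echo e810 ;;
        "$intel":0x37d2)                   echo x722 ;;
        "$mellanox":*)                     echo mlx ;;
        *)                                 echo unknown ;;
      esac
    }
    nic_family 0x8086 0x159b   # -> e810, matching the "Found 0000:0a:00.0 (0x8086 - 0x159b)" lines below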
00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:35.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:35.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.214 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.215 
20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:35.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:35.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.215 20:28:13 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.215 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:35.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:22:35.472 00:22:35.472 --- 10.0.0.2 ping statistics --- 00:22:35.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.472 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:22:35.472 00:22:35.472 --- 10.0.0.1 ping statistics --- 00:22:35.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.472 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4085750 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4085750 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 4085750 ']' 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.472 20:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.472 [2024-07-15 20:28:13.837252] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:22:35.472 [2024-07-15 20:28:13.837327] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:35.472 [2024-07-15 20:28:13.903926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.472 [2024-07-15 20:28:13.982091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.472 [2024-07-15 20:28:13.982145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.472 [2024-07-15 20:28:13.982158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.472 [2024-07-15 20:28:13.982169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.472 [2024-07-15 20:28:13.982183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
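The network preparation traced above boils down to moving one port of the E810 NIC into a private network namespace and leaving its partner port in the root namespace, so target and initiator talk over a real link. Condensed into the plain commands executed (interface names and addresses exactly as in the trace; nothing beyond what the log shows):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # reachability check before starting the target

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78), so it listens on 10.0.0.2 while the initiator-side tools connect from 10.0.0.1.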
00:22:35.472 [2024-07-15 20:28:13.982270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.472 [2024-07-15 20:28:13.982336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:35.472 [2024-07-15 20:28:13.982386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:35.472 [2024-07-15 20:28:13.982751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.731 [2024-07-15 20:28:14.119817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.731 Malloc0 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.731 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.732 [2024-07-15 20:28:14.158345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.732 { 00:22:35.732 "params": { 00:22:35.732 "name": "Nvme$subsystem", 00:22:35.732 "trtype": "$TEST_TRANSPORT", 00:22:35.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.732 "adrfam": "ipv4", 00:22:35.732 "trsvcid": "$NVMF_PORT", 00:22:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.732 "hdgst": ${hdgst:-false}, 00:22:35.732 "ddgst": ${ddgst:-false} 00:22:35.732 }, 00:22:35.732 "method": "bdev_nvme_attach_controller" 00:22:35.732 } 00:22:35.732 EOF 00:22:35.732 )") 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:35.732 20:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:35.732 "params": { 00:22:35.732 "name": "Nvme1", 00:22:35.732 "trtype": "tcp", 00:22:35.732 "traddr": "10.0.0.2", 00:22:35.732 "adrfam": "ipv4", 00:22:35.732 "trsvcid": "4420", 00:22:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.732 "hdgst": false, 00:22:35.732 "ddgst": false 00:22:35.732 }, 00:22:35.732 "method": "bdev_nvme_attach_controller" 00:22:35.732 }' 00:22:35.732 [2024-07-15 20:28:14.204140] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
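The rpc_cmd calls traced above configure the freshly started target. Expressed as direct scripts/rpc.py invocations (rpc_cmd in the test harness wraps the same script, so this is a rough equivalent rather than the literal commands run), the sequence is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio application is then pointed at that listener through the JSON fragment printed above (bdev_nvme_attach_controller against 10.0.0.2:4420), passed in via --json /dev/fd/62 together with the same --no-huge -s 1024 memory options.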
00:22:35.732 [2024-07-15 20:28:14.204246] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4085781 ] 00:22:35.990 [2024-07-15 20:28:14.262793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:35.990 [2024-07-15 20:28:14.350271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.990 [2024-07-15 20:28:14.350324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.990 [2024-07-15 20:28:14.350327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.250 I/O targets: 00:22:36.250 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:36.250 00:22:36.250 00:22:36.250 CUnit - A unit testing framework for C - Version 2.1-3 00:22:36.250 http://cunit.sourceforge.net/ 00:22:36.250 00:22:36.250 00:22:36.250 Suite: bdevio tests on: Nvme1n1 00:22:36.250 Test: blockdev write read block ...passed 00:22:36.250 Test: blockdev write zeroes read block ...passed 00:22:36.250 Test: blockdev write zeroes read no split ...passed 00:22:36.250 Test: blockdev write zeroes read split ...passed 00:22:36.250 Test: blockdev write zeroes read split partial ...passed 00:22:36.250 Test: blockdev reset ...[2024-07-15 20:28:14.765509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.250 [2024-07-15 20:28:14.765626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4f4e0 (9): Bad file descriptor 00:22:36.510 [2024-07-15 20:28:14.782361] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:36.510 passed 00:22:36.510 Test: blockdev write read 8 blocks ...passed 00:22:36.510 Test: blockdev write read size > 128k ...passed 00:22:36.510 Test: blockdev write read invalid size ...passed 00:22:36.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:36.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:36.510 Test: blockdev write read max offset ...passed 00:22:36.510 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:36.510 Test: blockdev writev readv 8 blocks ...passed 00:22:36.510 Test: blockdev writev readv 30 x 1block ...passed 00:22:36.510 Test: blockdev writev readv block ...passed 00:22:36.510 Test: blockdev writev readv size > 128k ...passed 00:22:36.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:36.511 Test: blockdev comparev and writev ...[2024-07-15 20:28:15.002326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.002362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 20:28:15.002386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.002404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 20:28:15.002828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.002852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 20:28:15.002886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.002906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 20:28:15.003323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.003349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 20:28:15.003371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.003388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 20:28:15.003798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.003823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 20:28:15.003845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.511 [2024-07-15 20:28:15.003862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:36.769 passed 00:22:36.769 Test: blockdev nvme passthru rw ...passed 00:22:36.769 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:28:15.086297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.769 [2024-07-15 20:28:15.086326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:36.769 [2024-07-15 20:28:15.086559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.769 [2024-07-15 20:28:15.086582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:36.769 [2024-07-15 20:28:15.086808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.769 [2024-07-15 20:28:15.086831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:36.769 [2024-07-15 20:28:15.087071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.769 [2024-07-15 20:28:15.087097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:36.769 passed 00:22:36.769 Test: blockdev nvme admin passthru ...passed 00:22:36.769 Test: blockdev copy ...passed 00:22:36.769 00:22:36.769 Run Summary: Type Total Ran Passed Failed Inactive 00:22:36.769 suites 1 1 n/a 0 0 00:22:36.769 tests 23 23 23 0 0 00:22:36.769 asserts 152 152 152 0 n/a 00:22:36.769 00:22:36.769 Elapsed time = 1.026 seconds 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:37.027 rmmod nvme_tcp 00:22:37.027 rmmod nvme_fabrics 00:22:37.027 rmmod nvme_keyring 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4085750 ']' 00:22:37.027 20:28:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4085750 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 4085750 ']' 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 4085750 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4085750 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4085750' 00:22:37.027 killing process with pid 4085750 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 4085750 00:22:37.027 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 4085750 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.624 20:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.537 20:28:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:39.537 00:22:39.537 real 0m6.383s 00:22:39.537 user 0m10.258s 00:22:39.537 sys 0m2.465s 00:22:39.537 20:28:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.537 20:28:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.537 ************************************ 00:22:39.537 END TEST nvmf_bdevio_no_huge 00:22:39.537 ************************************ 00:22:39.537 20:28:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:39.537 20:28:17 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:39.537 20:28:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:39.537 20:28:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.537 20:28:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.537 ************************************ 00:22:39.537 START TEST nvmf_tls 00:22:39.537 ************************************ 00:22:39.537 20:28:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:39.537 * Looking for test storage... 
00:22:39.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.537 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.538 20:28:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.444 
20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:41.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:41.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:41.444 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:41.444 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.444 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.445 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.445 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.445 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.445 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.445 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.704 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.704 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.704 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.704 20:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:22:41.704 00:22:41.704 --- 10.0.0.2 ping statistics --- 00:22:41.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.704 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:22:41.704 00:22:41.704 --- 10.0.0.1 ping statistics --- 00:22:41.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.704 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4087846 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4087846 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4087846 ']' 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.704 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.704 [2024-07-15 20:28:20.137639] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:22:41.704 [2024-07-15 20:28:20.137739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.704 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.704 [2024-07-15 20:28:20.213438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.962 [2024-07-15 20:28:20.307949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.962 [2024-07-15 20:28:20.308006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:41.962 [2024-07-15 20:28:20.308019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.962 [2024-07-15 20:28:20.308031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.962 [2024-07-15 20:28:20.308055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.962 [2024-07-15 20:28:20.308082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:41.962 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:42.220 true 00:22:42.220 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.220 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:42.478 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:42.478 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:42.478 20:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:42.737 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.737 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:42.996 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:42.996 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:42.996 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:43.255 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.255 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:43.515 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:43.515 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:43.515 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.515 20:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:43.774 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:43.774 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:43.774 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:44.032 20:28:22 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.032 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:44.290 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:44.290 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:44.290 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:44.548 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.548 20:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.lyydWFMRuo 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.xU2RvmQn68 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.lyydWFMRuo 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xU2RvmQn68 00:22:44.806 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:45.063 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:45.627 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.lyydWFMRuo 00:22:45.627 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lyydWFMRuo 00:22:45.627 20:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.883 [2024-07-15 20:28:24.211802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.883 20:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.139 20:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.396 [2024-07-15 20:28:24.685012] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.396 [2024-07-15 20:28:24.685250] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.396 20:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.655 malloc0 00:22:46.655 20:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.969 20:28:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lyydWFMRuo 00:22:46.969 [2024-07-15 20:28:25.410686] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:46.969 20:28:25 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.lyydWFMRuo 00:22:46.969 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.181 Initializing NVMe Controllers 00:22:59.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.181 Initialization complete. Launching workers. 
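The format_interchange_psk trace above (nvmf/common.sh@702-705) turns a hex key and a digest number into the NVMe TLS PSK interchange string used for the rest of this run, here NVMeTLSkey-1:01:...JEiQ: for 00112233445566778899aabbccddeeff with digest 1. Below is a minimal reconstruction of that helper, inferred from the traced inputs and outputs rather than copied from the SPDK sources: base64-encode the key characters as given, append their CRC-32, and wrap the result with the NVMeTLSkey-1 prefix, a two-digit digest field and a trailing colon. The CRC-32/little-endian detail and the 01 = SHA-256 / 02 = SHA-384 mapping are assumptions.

    # Hypothetical reconstruction of format_interchange_psk, inferred from the trace above.
    # Assumptions: the key string is encoded as-is and a little-endian CRC-32 is appended.
    format_interchange_psk() {    # $1 = key string, $2 = digest (1 -> "01" SHA-256, 2 -> "02" SHA-384, assumed)
        python3 -c 'import base64,zlib,sys; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4,"little")).decode()), end="")' "$1" "$2"
    }

    key=$(format_interchange_psk 00112233445566778899aabbccddeeff 1)
    # expected, if the assumptions hold: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    echo -n "$key" > /tmp/psk.key && chmod 0600 /tmp/psk.key    # placeholder path; the traced test writes to a mktemp file and chmods it 0600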
00:22:59.181 ======================================================== 00:22:59.181 Latency(us) 00:22:59.181 Device Information : IOPS MiB/s Average min max 00:22:59.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7877.00 30.77 8127.64 1283.94 9125.41 00:22:59.181 ======================================================== 00:22:59.181 Total : 7877.00 30.77 8127.64 1283.94 9125.41 00:22:59.181 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lyydWFMRuo 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lyydWFMRuo' 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4089730 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4089730 /var/tmp/bdevperf.sock 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4089730 ']' 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.181 [2024-07-15 20:28:35.581549] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
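Taken together with the interface setup at the top of this section (target side in the cvl_0_0_ns_spdk namespace on 10.0.0.2, initiator side on 10.0.0.1), the target-side trace above reduces to a short RPC sequence: the target is started with --wait-for-rpc, the ssl socket implementation is selected and its TLS version and kTLS flag are set and read back before framework_start_init, and setup_nvmf_tgt then creates the transport, subsystem, TLS-enabled listener and namespace and registers the host key. A condensed replay of that sequence, with the same paths, NQNs and key file as this run:

    # Condensed replay of the target-side setup traced above (target started with --wait-for-rpc).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc sock_set_default_impl -i ssl                      # target/tls.sh@70
    $rpc sock_impl_set_options -i ssl --tls-version 13     # the trace also round-trips 7 and kTLS via sock_impl_get_options | jq
    $rpc framework_start_init                              # leave the --wait-for-rpc state
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lyydWFMRuo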
00:22:59.181 [2024-07-15 20:28:35.581622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089730 ] 00:22:59.181 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.181 [2024-07-15 20:28:35.640522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.181 [2024-07-15 20:28:35.723928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:59.181 20:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lyydWFMRuo 00:22:59.181 [2024-07-15 20:28:36.051283] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.181 [2024-07-15 20:28:36.051405] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.181 TLSTESTn1 00:22:59.182 20:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:59.182 Running I/O for 10 seconds... 00:23:09.188 00:23:09.188 Latency(us) 00:23:09.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.188 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:09.188 Verification LBA range: start 0x0 length 0x2000 00:23:09.188 TLSTESTn1 : 10.05 2045.77 7.99 0.00 0.00 62400.52 7912.87 95925.29 00:23:09.188 =================================================================================================================== 00:23:09.188 Total : 2045.77 7.99 0.00 0.00 62400.52 7912.87 95925.29 00:23:09.188 0 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4089730 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4089730 ']' 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4089730 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4089730 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4089730' 00:23:09.188 killing process with pid 4089730 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4089730 00:23:09.188 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.188 00:23:09.188 Latency(us) 00:23:09.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:09.188 =================================================================================================================== 00:23:09.188 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.188 [2024-07-15 20:28:46.385042] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4089730 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xU2RvmQn68 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xU2RvmQn68 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xU2RvmQn68 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xU2RvmQn68' 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4091015 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4091015 /var/tmp/bdevperf.sock 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4091015 ']' 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.188 [2024-07-15 20:28:46.651554] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
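Two initiator-side paths are exercised against that target with the same key file: spdk_nvme_perf run inside the target namespace with -S ssl and --psk-path (the 10-second randrw run traced above), and bdevperf started idle with -z, attached to the TLS listener over its RPC socket, then driven through bdevperf.py. A condensed replay of the bdevperf path with the arguments used in this run:

    # Condensed replay of the bdevperf-based positive test traced above (target/tls.sh@143).
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # (the real test waits for the RPC socket with waitforlisten before issuing commands)
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.lyydWFMRuo                          # creates bdev TLSTESTn1 over the TLS connection
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests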
00:23:09.188 [2024-07-15 20:28:46.651633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091015 ] 00:23:09.188 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.188 [2024-07-15 20:28:46.708214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.188 [2024-07-15 20:28:46.790365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:09.188 20:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xU2RvmQn68 00:23:09.188 [2024-07-15 20:28:47.165766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.188 [2024-07-15 20:28:47.165919] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:09.188 [2024-07-15 20:28:47.171194] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:09.188 [2024-07-15 20:28:47.171710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c2ab0 (107): Transport endpoint is not connected 00:23:09.189 [2024-07-15 20:28:47.172696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c2ab0 (9): Bad file descriptor 00:23:09.189 [2024-07-15 20:28:47.173698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.189 [2024-07-15 20:28:47.173717] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:09.189 [2024-07-15 20:28:47.173748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:09.189 request: 00:23:09.189 { 00:23:09.189 "name": "TLSTEST", 00:23:09.189 "trtype": "tcp", 00:23:09.189 "traddr": "10.0.0.2", 00:23:09.189 "adrfam": "ipv4", 00:23:09.189 "trsvcid": "4420", 00:23:09.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.189 "prchk_reftag": false, 00:23:09.189 "prchk_guard": false, 00:23:09.189 "hdgst": false, 00:23:09.189 "ddgst": false, 00:23:09.189 "psk": "/tmp/tmp.xU2RvmQn68", 00:23:09.189 "method": "bdev_nvme_attach_controller", 00:23:09.189 "req_id": 1 00:23:09.189 } 00:23:09.189 Got JSON-RPC error response 00:23:09.189 response: 00:23:09.189 { 00:23:09.189 "code": -5, 00:23:09.189 "message": "Input/output error" 00:23:09.189 } 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4091015 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4091015 ']' 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4091015 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091015 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091015' 00:23:09.189 killing process with pid 4091015 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4091015 00:23:09.189 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.189 00:23:09.189 Latency(us) 00:23:09.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.189 =================================================================================================================== 00:23:09.189 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.189 [2024-07-15 20:28:47.224286] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4091015 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lyydWFMRuo 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lyydWFMRuo 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lyydWFMRuo 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lyydWFMRuo' 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4091070 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4091070 /var/tmp/bdevperf.sock 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4091070 ']' 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.189 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.189 [2024-07-15 20:28:47.474093] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
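The case that just finished (target/tls.sh@146, wrapped in NOT) repeats the attach with the second key, /tmp/tmp.xU2RvmQn68, which was never registered for host1 on cnode1; the connection is dropped and bdev_nvme_attach_controller returns the Input/output error response shown above, and that non-zero exit is exactly what the wrapper expects. A minimal sketch of the expected-failure check, assuming the target and bdevperf instances set up earlier in this run:

    # Expected failure: tmp.xU2RvmQn68 is not the key registered for this host/subsystem pair.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xU2RvmQn68; then
        echo "unexpected: attach with the non-registered key succeeded" >&2
        exit 1
    fi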
00:23:09.189 [2024-07-15 20:28:47.474192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091070 ] 00:23:09.189 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.189 [2024-07-15 20:28:47.531378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.189 [2024-07-15 20:28:47.616301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.447 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.447 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:09.447 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.lyydWFMRuo 00:23:09.447 [2024-07-15 20:28:47.944364] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.447 [2024-07-15 20:28:47.944485] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:09.447 [2024-07-15 20:28:47.952632] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:09.447 [2024-07-15 20:28:47.952660] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:09.447 [2024-07-15 20:28:47.952711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:09.447 [2024-07-15 20:28:47.953303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626ab0 (107): Transport endpoint is not connected 00:23:09.447 [2024-07-15 20:28:47.954293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626ab0 (9): Bad file descriptor 00:23:09.447 [2024-07-15 20:28:47.955292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.447 [2024-07-15 20:28:47.955310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:09.448 [2024-07-15 20:28:47.955341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:09.448 request: 00:23:09.448 { 00:23:09.448 "name": "TLSTEST", 00:23:09.448 "trtype": "tcp", 00:23:09.448 "traddr": "10.0.0.2", 00:23:09.448 "adrfam": "ipv4", 00:23:09.448 "trsvcid": "4420", 00:23:09.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.448 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.448 "prchk_reftag": false, 00:23:09.448 "prchk_guard": false, 00:23:09.448 "hdgst": false, 00:23:09.448 "ddgst": false, 00:23:09.448 "psk": "/tmp/tmp.lyydWFMRuo", 00:23:09.448 "method": "bdev_nvme_attach_controller", 00:23:09.448 "req_id": 1 00:23:09.448 } 00:23:09.448 Got JSON-RPC error response 00:23:09.448 response: 00:23:09.448 { 00:23:09.448 "code": -5, 00:23:09.448 "message": "Input/output error" 00:23:09.448 } 00:23:09.448 20:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4091070 00:23:09.448 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4091070 ']' 00:23:09.448 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4091070 00:23:09.448 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.448 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.448 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091070 00:23:09.706 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:09.706 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:09.706 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091070' 00:23:09.706 killing process with pid 4091070 00:23:09.706 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4091070 00:23:09.706 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.706 00:23:09.706 Latency(us) 00:23:09.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.706 =================================================================================================================== 00:23:09.706 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.706 [2024-07-15 20:28:47.996408] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:09.706 20:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4091070 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lyydWFMRuo 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lyydWFMRuo 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lyydWFMRuo 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lyydWFMRuo' 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4091196 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4091196 /var/tmp/bdevperf.sock 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4091196 ']' 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.706 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.966 [2024-07-15 20:28:48.247447] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
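In the case traced just above and in the one that follows, the key itself is the registered one (/tmp/tmp.lyydWFMRuo) but the identity offered in the handshake is not: first the host NQN is swapped for host2, then the subsystem NQN for cnode2. The server-side errors ("Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>") show that the target looks the PSK up by the host NQN / subsystem NQN pair registered with nvmf_subsystem_add_host, so either NQN being wrong fails the attach in the same way. The two attempted attaches, as traced:

    # Valid key, wrong identity: both attaches below are expected to fail the PSK lookup.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.lyydWFMRuo || true   # host2 was never added to cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lyydWFMRuo || true   # cnode2 does not exist on this target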
00:23:09.966 [2024-07-15 20:28:48.247523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091196 ] 00:23:09.966 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.966 [2024-07-15 20:28:48.304930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.966 [2024-07-15 20:28:48.389638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.966 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.966 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:09.966 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lyydWFMRuo 00:23:10.224 [2024-07-15 20:28:48.709535] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.224 [2024-07-15 20:28:48.709663] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.224 [2024-07-15 20:28:48.714781] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:10.224 [2024-07-15 20:28:48.714812] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:10.224 [2024-07-15 20:28:48.714850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.224 [2024-07-15 20:28:48.715459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6ab0 (107): Transport endpoint is not connected 00:23:10.224 [2024-07-15 20:28:48.716447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6ab0 (9): Bad file descriptor 00:23:10.224 [2024-07-15 20:28:48.717445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:10.224 [2024-07-15 20:28:48.717464] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:10.224 [2024-07-15 20:28:48.717496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:10.224 request: 00:23:10.224 { 00:23:10.224 "name": "TLSTEST", 00:23:10.224 "trtype": "tcp", 00:23:10.224 "traddr": "10.0.0.2", 00:23:10.224 "adrfam": "ipv4", 00:23:10.224 "trsvcid": "4420", 00:23:10.224 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.224 "prchk_reftag": false, 00:23:10.224 "prchk_guard": false, 00:23:10.224 "hdgst": false, 00:23:10.224 "ddgst": false, 00:23:10.224 "psk": "/tmp/tmp.lyydWFMRuo", 00:23:10.224 "method": "bdev_nvme_attach_controller", 00:23:10.224 "req_id": 1 00:23:10.224 } 00:23:10.224 Got JSON-RPC error response 00:23:10.224 response: 00:23:10.224 { 00:23:10.224 "code": -5, 00:23:10.224 "message": "Input/output error" 00:23:10.224 } 00:23:10.224 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4091196 00:23:10.224 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4091196 ']' 00:23:10.224 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4091196 00:23:10.224 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:10.224 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.224 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091196 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091196' 00:23:10.482 killing process with pid 4091196 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4091196 00:23:10.482 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.482 00:23:10.482 Latency(us) 00:23:10.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.482 =================================================================================================================== 00:23:10.482 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.482 [2024-07-15 20:28:48.769567] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4091196 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4091331 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4091331 /var/tmp/bdevperf.sock 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4091331 ']' 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.482 20:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.741 [2024-07-15 20:28:49.021650] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
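The last negative case, started just above (target/tls.sh@155), passes an empty PSK, so the attach is attempted without any key against the listener that was created with -k; as the error and JSON-RPC response that follow show, controller initialization fails in the same way as the earlier cases. The attempted attach, as traced below:

    # No key at all against the TLS-enabled (-k) listener: expected to fail (see the error below).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 || true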
00:23:10.741 [2024-07-15 20:28:49.021728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091331 ] 00:23:10.741 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.741 [2024-07-15 20:28:49.078938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.741 [2024-07-15 20:28:49.161398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.741 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.741 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:10.741 20:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:11.000 [2024-07-15 20:28:49.487073] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.000 [2024-07-15 20:28:49.488952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142e60 (9): Bad file descriptor 00:23:11.000 [2024-07-15 20:28:49.489950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.000 [2024-07-15 20:28:49.489971] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.000 [2024-07-15 20:28:49.489988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:11.000 request: 00:23:11.000 { 00:23:11.000 "name": "TLSTEST", 00:23:11.000 "trtype": "tcp", 00:23:11.000 "traddr": "10.0.0.2", 00:23:11.000 "adrfam": "ipv4", 00:23:11.000 "trsvcid": "4420", 00:23:11.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.000 "prchk_reftag": false, 00:23:11.000 "prchk_guard": false, 00:23:11.000 "hdgst": false, 00:23:11.000 "ddgst": false, 00:23:11.000 "method": "bdev_nvme_attach_controller", 00:23:11.000 "req_id": 1 00:23:11.000 } 00:23:11.000 Got JSON-RPC error response 00:23:11.000 response: 00:23:11.000 { 00:23:11.000 "code": -5, 00:23:11.000 "message": "Input/output error" 00:23:11.000 } 00:23:11.000 20:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4091331 00:23:11.000 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4091331 ']' 00:23:11.000 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4091331 00:23:11.000 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:11.000 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.000 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091331 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091331' 00:23:11.258 killing process with pid 4091331 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4091331 00:23:11.258 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.258 00:23:11.258 Latency(us) 00:23:11.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.258 =================================================================================================================== 00:23:11.258 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4091331 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4087846 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4087846 ']' 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4087846 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4087846 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:11.258 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:11.259 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4087846' 00:23:11.259 
killing process with pid 4087846 00:23:11.259 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4087846 00:23:11.259 [2024-07-15 20:28:49.750797] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:11.259 20:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4087846 00:23:11.516 20:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:11.516 20:28:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:11.516 20:28:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.516 20:28:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:11.516 20:28:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:11.516 20:28:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:11.516 20:28:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:11.516 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:11.516 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:11.516 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.XMWurxuAWE 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.XMWurxuAWE 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4091483 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4091483 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4091483 ']' 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.775 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.775 [2024-07-15 20:28:50.098991] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
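After the negative cases the first target (pid 4087846) is shut down and the positive flow is repeated with a longer credential: a 48-character hex key formatted with digest 2, giving the NVMeTLSkey-1:02:...: string written to /tmp/tmp.XMWurxuAWE, and a fresh nvmf_tgt (pid 4091483) started without --wait-for-rpc. Reusing the reconstruction sketched earlier in this section, with the same caveat that the CRC-32 detail is an assumption, the longer key would be produced as:

    # Same hypothetical reconstruction as before, now with the 48-character key and digest 2 ("02").
    python3 -c 'import base64,zlib,sys; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4,"little")).decode()))' \
        00112233445566778899aabbccddeeff0011223344556677 2
    # should match the NVMeTLSkey-1:02:...wWXNJw==: value stored in /tmp/tmp.XMWurxuAWE above, if the assumptions hold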
00:23:11.775 [2024-07-15 20:28:50.099083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.775 [2024-07-15 20:28:50.162608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.775 [2024-07-15 20:28:50.252224] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.775 [2024-07-15 20:28:50.252284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.775 [2024-07-15 20:28:50.252313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.775 [2024-07-15 20:28:50.252324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.775 [2024-07-15 20:28:50.252334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.775 [2024-07-15 20:28:50.252362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.XMWurxuAWE 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XMWurxuAWE 00:23:12.033 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.291 [2024-07-15 20:28:50.646961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.291 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.548 20:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.806 [2024-07-15 20:28:51.144329] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.806 [2024-07-15 20:28:51.144565] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.806 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:13.064 malloc0 00:23:13.064 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.323 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.XMWurxuAWE 00:23:13.582 [2024-07-15 20:28:51.972951] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XMWurxuAWE 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XMWurxuAWE' 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4091650 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4091650 /var/tmp/bdevperf.sock 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4091650 ']' 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.582 20:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.582 [2024-07-15 20:28:52.033431] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:23:13.582 [2024-07-15 20:28:52.033502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091650 ] 00:23:13.582 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.582 [2024-07-15 20:28:52.089759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.840 [2024-07-15 20:28:52.173661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.840 20:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.840 20:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:13.840 20:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XMWurxuAWE 00:23:14.098 [2024-07-15 20:28:52.505751] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.098 [2024-07-15 20:28:52.505892] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.098 TLSTESTn1 00:23:14.098 20:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:14.355 Running I/O for 10 seconds... 00:23:24.334 00:23:24.334 Latency(us) 00:23:24.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.334 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.334 Verification LBA range: start 0x0 length 0x2000 00:23:24.334 TLSTESTn1 : 10.06 2110.22 8.24 0.00 0.00 60479.89 11942.12 97478.73 00:23:24.334 =================================================================================================================== 00:23:24.334 Total : 2110.22 8.24 0.00 0.00 60479.89 11942.12 97478.73 00:23:24.334 0 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4091650 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4091650 ']' 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4091650 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091650 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091650' 00:23:24.334 killing process with pid 4091650 00:23:24.334 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4091650 00:23:24.334 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.334 00:23:24.334 Latency(us) 00:23:24.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:24.335 =================================================================================================================== 00:23:24.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.335 [2024-07-15 20:29:02.820667] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:24.335 20:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4091650 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.XMWurxuAWE 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XMWurxuAWE 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XMWurxuAWE 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XMWurxuAWE 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XMWurxuAWE' 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4092960 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4092960 /var/tmp/bdevperf.sock 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4092960 ']' 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.593 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.593 [2024-07-15 20:29:03.097048] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:23:24.593 [2024-07-15 20:29:03.097129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092960 ] 00:23:24.852 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.852 [2024-07-15 20:29:03.154944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.852 [2024-07-15 20:29:03.236739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.852 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.852 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.852 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XMWurxuAWE 00:23:25.110 [2024-07-15 20:29:03.564947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.110 [2024-07-15 20:29:03.565031] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:25.110 [2024-07-15 20:29:03.565046] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.XMWurxuAWE 00:23:25.110 request: 00:23:25.110 { 00:23:25.110 "name": "TLSTEST", 00:23:25.110 "trtype": "tcp", 00:23:25.110 "traddr": "10.0.0.2", 00:23:25.110 "adrfam": "ipv4", 00:23:25.110 "trsvcid": "4420", 00:23:25.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.110 "prchk_reftag": false, 00:23:25.110 "prchk_guard": false, 00:23:25.110 "hdgst": false, 00:23:25.110 "ddgst": false, 00:23:25.110 "psk": "/tmp/tmp.XMWurxuAWE", 00:23:25.110 "method": "bdev_nvme_attach_controller", 00:23:25.110 "req_id": 1 00:23:25.110 } 00:23:25.110 Got JSON-RPC error response 00:23:25.110 response: 00:23:25.110 { 00:23:25.110 "code": -1, 00:23:25.110 "message": "Operation not permitted" 00:23:25.110 } 00:23:25.110 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4092960 00:23:25.110 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4092960 ']' 00:23:25.110 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4092960 00:23:25.110 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:25.110 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.111 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4092960 00:23:25.111 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:25.111 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:25.111 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4092960' 00:23:25.111 killing process with pid 4092960 00:23:25.111 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4092960 00:23:25.111 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.111 00:23:25.111 Latency(us) 00:23:25.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.111 
=================================================================================================================== 00:23:25.111 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:25.111 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4092960 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4091483 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4091483 ']' 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4091483 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091483 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091483' 00:23:25.370 killing process with pid 4091483 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4091483 00:23:25.370 [2024-07-15 20:29:03.829553] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:25.370 20:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4091483 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4093105 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4093105 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4093105 ']' 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
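The JSON-RPC error -1 ("Operation not permitted") above is the expected result of the chmod 0666 at target/tls.sh@170: once the key file's mode is loosened, bdev_nvme_load_psk rejects it ("Incorrect permissions for PSK file") and the attach fails. A minimal sketch of the behaviour being exercised, using the same RPC as the trace; the test re-tightens the permissions at target/tls.sh@181 before reusing the key.

  chmod 0666 /tmp/tmp.XMWurxuAWE          # loosened on purpose by the test
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XMWurxuAWE
  # -> "Incorrect permissions for PSK file" / JSON-RPC "Operation not permitted"
  chmod 0600 /tmp/tmp.XMWurxuAWE          # owner-only again; the same attach succeeds later in the run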
00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.629 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.629 [2024-07-15 20:29:04.120801] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:23:25.629 [2024-07-15 20:29:04.120894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.629 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.887 [2024-07-15 20:29:04.189030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.887 [2024-07-15 20:29:04.276544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.887 [2024-07-15 20:29:04.276608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.887 [2024-07-15 20:29:04.276624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.887 [2024-07-15 20:29:04.276637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.887 [2024-07-15 20:29:04.276649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.887 [2024-07-15 20:29:04.276680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.887 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.887 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:25.887 20:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.887 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.887 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.XMWurxuAWE 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XMWurxuAWE 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.XMWurxuAWE 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XMWurxuAWE 00:23:26.146 20:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.406 [2024-07-15 20:29:04.688967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.406 20:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.665 
20:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:26.665 [2024-07-15 20:29:05.182261] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.665 [2024-07-15 20:29:05.182516] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.924 20:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:26.924 malloc0 00:23:26.924 20:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XMWurxuAWE 00:23:27.494 [2024-07-15 20:29:05.944079] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:27.494 [2024-07-15 20:29:05.944121] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:27.494 [2024-07-15 20:29:05.944156] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:27.494 request: 00:23:27.494 { 00:23:27.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.494 "host": "nqn.2016-06.io.spdk:host1", 00:23:27.494 "psk": "/tmp/tmp.XMWurxuAWE", 00:23:27.494 "method": "nvmf_subsystem_add_host", 00:23:27.494 "req_id": 1 00:23:27.494 } 00:23:27.494 Got JSON-RPC error response 00:23:27.494 response: 00:23:27.494 { 00:23:27.494 "code": -32603, 00:23:27.494 "message": "Internal error" 00:23:27.494 } 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4093105 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4093105 ']' 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4093105 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4093105 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4093105' 00:23:27.494 killing process with pid 4093105 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4093105 00:23:27.494 20:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4093105 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.XMWurxuAWE 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:27.752 
20:29:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4093398 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4093398 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4093398 ']' 00:23:27.752 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.753 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.753 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.753 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.753 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.013 [2024-07-15 20:29:06.306927] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:23:28.013 [2024-07-15 20:29:06.307010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.013 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.013 [2024-07-15 20:29:06.377196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.013 [2024-07-15 20:29:06.463037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.013 [2024-07-15 20:29:06.463100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.013 [2024-07-15 20:29:06.463117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.013 [2024-07-15 20:29:06.463131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.013 [2024-07-15 20:29:06.463142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.013 [2024-07-15 20:29:06.463185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.XMWurxuAWE 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XMWurxuAWE 00:23:28.271 20:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.529 [2024-07-15 20:29:06.862884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.529 20:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:28.785 20:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.073 [2024-07-15 20:29:07.444467] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.073 [2024-07-15 20:29:07.444718] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.073 20:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.362 malloc0 00:23:29.362 20:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:29.619 20:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XMWurxuAWE 00:23:29.876 [2024-07-15 20:29:08.334764] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:29.876 20:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4093685 00:23:29.876 20:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.876 20:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.876 20:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4093685 /var/tmp/bdevperf.sock 00:23:29.876 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4093685 ']' 00:23:29.877 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.877 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.877 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.877 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.877 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.877 [2024-07-15 20:29:08.397701] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:23:29.877 [2024-07-15 20:29:08.397769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093685 ] 00:23:30.134 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.134 [2024-07-15 20:29:08.454583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.134 [2024-07-15 20:29:08.539281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.134 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.134 20:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:30.134 20:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XMWurxuAWE 00:23:30.406 [2024-07-15 20:29:08.861010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.407 [2024-07-15 20:29:08.861150] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:30.664 TLSTESTn1 00:23:30.664 20:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:30.922 20:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:30.922 "subsystems": [ 00:23:30.922 { 00:23:30.922 "subsystem": "keyring", 00:23:30.922 "config": [] 00:23:30.922 }, 00:23:30.922 { 00:23:30.922 "subsystem": "iobuf", 00:23:30.922 "config": [ 00:23:30.922 { 00:23:30.922 "method": "iobuf_set_options", 00:23:30.922 "params": { 00:23:30.922 "small_pool_count": 8192, 00:23:30.922 "large_pool_count": 1024, 00:23:30.922 "small_bufsize": 8192, 00:23:30.922 "large_bufsize": 135168 00:23:30.923 } 00:23:30.923 } 00:23:30.923 ] 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "subsystem": "sock", 00:23:30.923 "config": [ 00:23:30.923 { 00:23:30.923 "method": "sock_set_default_impl", 00:23:30.923 "params": { 00:23:30.923 "impl_name": "posix" 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "sock_impl_set_options", 00:23:30.923 "params": { 00:23:30.923 "impl_name": "ssl", 00:23:30.923 "recv_buf_size": 4096, 00:23:30.923 "send_buf_size": 4096, 00:23:30.923 "enable_recv_pipe": true, 00:23:30.923 "enable_quickack": false, 00:23:30.923 "enable_placement_id": 0, 00:23:30.923 "enable_zerocopy_send_server": true, 00:23:30.923 "enable_zerocopy_send_client": false, 00:23:30.923 "zerocopy_threshold": 0, 00:23:30.923 "tls_version": 0, 00:23:30.923 "enable_ktls": false 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "sock_impl_set_options", 00:23:30.923 "params": { 00:23:30.923 "impl_name": "posix", 00:23:30.923 "recv_buf_size": 2097152, 00:23:30.923 
"send_buf_size": 2097152, 00:23:30.923 "enable_recv_pipe": true, 00:23:30.923 "enable_quickack": false, 00:23:30.923 "enable_placement_id": 0, 00:23:30.923 "enable_zerocopy_send_server": true, 00:23:30.923 "enable_zerocopy_send_client": false, 00:23:30.923 "zerocopy_threshold": 0, 00:23:30.923 "tls_version": 0, 00:23:30.923 "enable_ktls": false 00:23:30.923 } 00:23:30.923 } 00:23:30.923 ] 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "subsystem": "vmd", 00:23:30.923 "config": [] 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "subsystem": "accel", 00:23:30.923 "config": [ 00:23:30.923 { 00:23:30.923 "method": "accel_set_options", 00:23:30.923 "params": { 00:23:30.923 "small_cache_size": 128, 00:23:30.923 "large_cache_size": 16, 00:23:30.923 "task_count": 2048, 00:23:30.923 "sequence_count": 2048, 00:23:30.923 "buf_count": 2048 00:23:30.923 } 00:23:30.923 } 00:23:30.923 ] 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "subsystem": "bdev", 00:23:30.923 "config": [ 00:23:30.923 { 00:23:30.923 "method": "bdev_set_options", 00:23:30.923 "params": { 00:23:30.923 "bdev_io_pool_size": 65535, 00:23:30.923 "bdev_io_cache_size": 256, 00:23:30.923 "bdev_auto_examine": true, 00:23:30.923 "iobuf_small_cache_size": 128, 00:23:30.923 "iobuf_large_cache_size": 16 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "bdev_raid_set_options", 00:23:30.923 "params": { 00:23:30.923 "process_window_size_kb": 1024 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "bdev_iscsi_set_options", 00:23:30.923 "params": { 00:23:30.923 "timeout_sec": 30 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "bdev_nvme_set_options", 00:23:30.923 "params": { 00:23:30.923 "action_on_timeout": "none", 00:23:30.923 "timeout_us": 0, 00:23:30.923 "timeout_admin_us": 0, 00:23:30.923 "keep_alive_timeout_ms": 10000, 00:23:30.923 "arbitration_burst": 0, 00:23:30.923 "low_priority_weight": 0, 00:23:30.923 "medium_priority_weight": 0, 00:23:30.923 "high_priority_weight": 0, 00:23:30.923 "nvme_adminq_poll_period_us": 10000, 00:23:30.923 "nvme_ioq_poll_period_us": 0, 00:23:30.923 "io_queue_requests": 0, 00:23:30.923 "delay_cmd_submit": true, 00:23:30.923 "transport_retry_count": 4, 00:23:30.923 "bdev_retry_count": 3, 00:23:30.923 "transport_ack_timeout": 0, 00:23:30.923 "ctrlr_loss_timeout_sec": 0, 00:23:30.923 "reconnect_delay_sec": 0, 00:23:30.923 "fast_io_fail_timeout_sec": 0, 00:23:30.923 "disable_auto_failback": false, 00:23:30.923 "generate_uuids": false, 00:23:30.923 "transport_tos": 0, 00:23:30.923 "nvme_error_stat": false, 00:23:30.923 "rdma_srq_size": 0, 00:23:30.923 "io_path_stat": false, 00:23:30.923 "allow_accel_sequence": false, 00:23:30.923 "rdma_max_cq_size": 0, 00:23:30.923 "rdma_cm_event_timeout_ms": 0, 00:23:30.923 "dhchap_digests": [ 00:23:30.923 "sha256", 00:23:30.923 "sha384", 00:23:30.923 "sha512" 00:23:30.923 ], 00:23:30.923 "dhchap_dhgroups": [ 00:23:30.923 "null", 00:23:30.923 "ffdhe2048", 00:23:30.923 "ffdhe3072", 00:23:30.923 "ffdhe4096", 00:23:30.923 "ffdhe6144", 00:23:30.923 "ffdhe8192" 00:23:30.923 ] 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "bdev_nvme_set_hotplug", 00:23:30.923 "params": { 00:23:30.923 "period_us": 100000, 00:23:30.923 "enable": false 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "bdev_malloc_create", 00:23:30.923 "params": { 00:23:30.923 "name": "malloc0", 00:23:30.923 "num_blocks": 8192, 00:23:30.923 "block_size": 4096, 00:23:30.923 "physical_block_size": 4096, 00:23:30.923 "uuid": 
"66795f89-c1a3-4199-9134-8289f01897e5", 00:23:30.923 "optimal_io_boundary": 0 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "bdev_wait_for_examine" 00:23:30.923 } 00:23:30.923 ] 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "subsystem": "nbd", 00:23:30.923 "config": [] 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "subsystem": "scheduler", 00:23:30.923 "config": [ 00:23:30.923 { 00:23:30.923 "method": "framework_set_scheduler", 00:23:30.923 "params": { 00:23:30.923 "name": "static" 00:23:30.923 } 00:23:30.923 } 00:23:30.923 ] 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "subsystem": "nvmf", 00:23:30.923 "config": [ 00:23:30.923 { 00:23:30.923 "method": "nvmf_set_config", 00:23:30.923 "params": { 00:23:30.923 "discovery_filter": "match_any", 00:23:30.923 "admin_cmd_passthru": { 00:23:30.923 "identify_ctrlr": false 00:23:30.923 } 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "nvmf_set_max_subsystems", 00:23:30.923 "params": { 00:23:30.923 "max_subsystems": 1024 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "nvmf_set_crdt", 00:23:30.923 "params": { 00:23:30.923 "crdt1": 0, 00:23:30.923 "crdt2": 0, 00:23:30.923 "crdt3": 0 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "nvmf_create_transport", 00:23:30.923 "params": { 00:23:30.923 "trtype": "TCP", 00:23:30.923 "max_queue_depth": 128, 00:23:30.923 "max_io_qpairs_per_ctrlr": 127, 00:23:30.923 "in_capsule_data_size": 4096, 00:23:30.923 "max_io_size": 131072, 00:23:30.923 "io_unit_size": 131072, 00:23:30.923 "max_aq_depth": 128, 00:23:30.923 "num_shared_buffers": 511, 00:23:30.923 "buf_cache_size": 4294967295, 00:23:30.923 "dif_insert_or_strip": false, 00:23:30.923 "zcopy": false, 00:23:30.923 "c2h_success": false, 00:23:30.923 "sock_priority": 0, 00:23:30.923 "abort_timeout_sec": 1, 00:23:30.923 "ack_timeout": 0, 00:23:30.923 "data_wr_pool_size": 0 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "nvmf_create_subsystem", 00:23:30.923 "params": { 00:23:30.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.923 "allow_any_host": false, 00:23:30.923 "serial_number": "SPDK00000000000001", 00:23:30.923 "model_number": "SPDK bdev Controller", 00:23:30.923 "max_namespaces": 10, 00:23:30.923 "min_cntlid": 1, 00:23:30.923 "max_cntlid": 65519, 00:23:30.923 "ana_reporting": false 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "nvmf_subsystem_add_host", 00:23:30.923 "params": { 00:23:30.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.923 "host": "nqn.2016-06.io.spdk:host1", 00:23:30.923 "psk": "/tmp/tmp.XMWurxuAWE" 00:23:30.923 } 00:23:30.923 }, 00:23:30.923 { 00:23:30.923 "method": "nvmf_subsystem_add_ns", 00:23:30.923 "params": { 00:23:30.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.923 "namespace": { 00:23:30.923 "nsid": 1, 00:23:30.923 "bdev_name": "malloc0", 00:23:30.923 "nguid": "66795F89C1A3419991348289F01897E5", 00:23:30.923 "uuid": "66795f89-c1a3-4199-9134-8289f01897e5", 00:23:30.923 "no_auto_visible": false 00:23:30.923 } 00:23:30.923 } 00:23:30.923 }, 00:23:30.924 { 00:23:30.924 "method": "nvmf_subsystem_add_listener", 00:23:30.924 "params": { 00:23:30.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.924 "listen_address": { 00:23:30.924 "trtype": "TCP", 00:23:30.924 "adrfam": "IPv4", 00:23:30.924 "traddr": "10.0.0.2", 00:23:30.924 "trsvcid": "4420" 00:23:30.924 }, 00:23:30.924 "secure_channel": true 00:23:30.924 } 00:23:30.924 } 00:23:30.924 ] 00:23:30.924 } 00:23:30.924 ] 00:23:30.924 }' 00:23:30.924 20:29:09 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:31.183 20:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:31.183 "subsystems": [ 00:23:31.183 { 00:23:31.183 "subsystem": "keyring", 00:23:31.183 "config": [] 00:23:31.183 }, 00:23:31.183 { 00:23:31.183 "subsystem": "iobuf", 00:23:31.183 "config": [ 00:23:31.183 { 00:23:31.183 "method": "iobuf_set_options", 00:23:31.183 "params": { 00:23:31.183 "small_pool_count": 8192, 00:23:31.183 "large_pool_count": 1024, 00:23:31.183 "small_bufsize": 8192, 00:23:31.183 "large_bufsize": 135168 00:23:31.183 } 00:23:31.183 } 00:23:31.183 ] 00:23:31.183 }, 00:23:31.183 { 00:23:31.183 "subsystem": "sock", 00:23:31.183 "config": [ 00:23:31.183 { 00:23:31.183 "method": "sock_set_default_impl", 00:23:31.183 "params": { 00:23:31.183 "impl_name": "posix" 00:23:31.183 } 00:23:31.183 }, 00:23:31.183 { 00:23:31.183 "method": "sock_impl_set_options", 00:23:31.183 "params": { 00:23:31.183 "impl_name": "ssl", 00:23:31.183 "recv_buf_size": 4096, 00:23:31.183 "send_buf_size": 4096, 00:23:31.183 "enable_recv_pipe": true, 00:23:31.183 "enable_quickack": false, 00:23:31.183 "enable_placement_id": 0, 00:23:31.183 "enable_zerocopy_send_server": true, 00:23:31.183 "enable_zerocopy_send_client": false, 00:23:31.183 "zerocopy_threshold": 0, 00:23:31.183 "tls_version": 0, 00:23:31.183 "enable_ktls": false 00:23:31.183 } 00:23:31.183 }, 00:23:31.183 { 00:23:31.183 "method": "sock_impl_set_options", 00:23:31.183 "params": { 00:23:31.183 "impl_name": "posix", 00:23:31.183 "recv_buf_size": 2097152, 00:23:31.183 "send_buf_size": 2097152, 00:23:31.183 "enable_recv_pipe": true, 00:23:31.183 "enable_quickack": false, 00:23:31.183 "enable_placement_id": 0, 00:23:31.183 "enable_zerocopy_send_server": true, 00:23:31.183 "enable_zerocopy_send_client": false, 00:23:31.183 "zerocopy_threshold": 0, 00:23:31.183 "tls_version": 0, 00:23:31.183 "enable_ktls": false 00:23:31.183 } 00:23:31.183 } 00:23:31.183 ] 00:23:31.183 }, 00:23:31.183 { 00:23:31.183 "subsystem": "vmd", 00:23:31.183 "config": [] 00:23:31.183 }, 00:23:31.183 { 00:23:31.183 "subsystem": "accel", 00:23:31.184 "config": [ 00:23:31.184 { 00:23:31.184 "method": "accel_set_options", 00:23:31.184 "params": { 00:23:31.184 "small_cache_size": 128, 00:23:31.184 "large_cache_size": 16, 00:23:31.184 "task_count": 2048, 00:23:31.184 "sequence_count": 2048, 00:23:31.184 "buf_count": 2048 00:23:31.184 } 00:23:31.184 } 00:23:31.184 ] 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "subsystem": "bdev", 00:23:31.184 "config": [ 00:23:31.184 { 00:23:31.184 "method": "bdev_set_options", 00:23:31.184 "params": { 00:23:31.184 "bdev_io_pool_size": 65535, 00:23:31.184 "bdev_io_cache_size": 256, 00:23:31.184 "bdev_auto_examine": true, 00:23:31.184 "iobuf_small_cache_size": 128, 00:23:31.184 "iobuf_large_cache_size": 16 00:23:31.184 } 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "method": "bdev_raid_set_options", 00:23:31.184 "params": { 00:23:31.184 "process_window_size_kb": 1024 00:23:31.184 } 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "method": "bdev_iscsi_set_options", 00:23:31.184 "params": { 00:23:31.184 "timeout_sec": 30 00:23:31.184 } 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "method": "bdev_nvme_set_options", 00:23:31.184 "params": { 00:23:31.184 "action_on_timeout": "none", 00:23:31.184 "timeout_us": 0, 00:23:31.184 "timeout_admin_us": 0, 00:23:31.184 "keep_alive_timeout_ms": 10000, 00:23:31.184 "arbitration_burst": 0, 
00:23:31.184 "low_priority_weight": 0, 00:23:31.184 "medium_priority_weight": 0, 00:23:31.184 "high_priority_weight": 0, 00:23:31.184 "nvme_adminq_poll_period_us": 10000, 00:23:31.184 "nvme_ioq_poll_period_us": 0, 00:23:31.184 "io_queue_requests": 512, 00:23:31.184 "delay_cmd_submit": true, 00:23:31.184 "transport_retry_count": 4, 00:23:31.184 "bdev_retry_count": 3, 00:23:31.184 "transport_ack_timeout": 0, 00:23:31.184 "ctrlr_loss_timeout_sec": 0, 00:23:31.184 "reconnect_delay_sec": 0, 00:23:31.184 "fast_io_fail_timeout_sec": 0, 00:23:31.184 "disable_auto_failback": false, 00:23:31.184 "generate_uuids": false, 00:23:31.184 "transport_tos": 0, 00:23:31.184 "nvme_error_stat": false, 00:23:31.184 "rdma_srq_size": 0, 00:23:31.184 "io_path_stat": false, 00:23:31.184 "allow_accel_sequence": false, 00:23:31.184 "rdma_max_cq_size": 0, 00:23:31.184 "rdma_cm_event_timeout_ms": 0, 00:23:31.184 "dhchap_digests": [ 00:23:31.184 "sha256", 00:23:31.184 "sha384", 00:23:31.184 "sha512" 00:23:31.184 ], 00:23:31.184 "dhchap_dhgroups": [ 00:23:31.184 "null", 00:23:31.184 "ffdhe2048", 00:23:31.184 "ffdhe3072", 00:23:31.184 "ffdhe4096", 00:23:31.184 "ffdhe6144", 00:23:31.184 "ffdhe8192" 00:23:31.184 ] 00:23:31.184 } 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "method": "bdev_nvme_attach_controller", 00:23:31.184 "params": { 00:23:31.184 "name": "TLSTEST", 00:23:31.184 "trtype": "TCP", 00:23:31.184 "adrfam": "IPv4", 00:23:31.184 "traddr": "10.0.0.2", 00:23:31.184 "trsvcid": "4420", 00:23:31.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.184 "prchk_reftag": false, 00:23:31.184 "prchk_guard": false, 00:23:31.184 "ctrlr_loss_timeout_sec": 0, 00:23:31.184 "reconnect_delay_sec": 0, 00:23:31.184 "fast_io_fail_timeout_sec": 0, 00:23:31.184 "psk": "/tmp/tmp.XMWurxuAWE", 00:23:31.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.184 "hdgst": false, 00:23:31.184 "ddgst": false 00:23:31.184 } 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "method": "bdev_nvme_set_hotplug", 00:23:31.184 "params": { 00:23:31.184 "period_us": 100000, 00:23:31.184 "enable": false 00:23:31.184 } 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "method": "bdev_wait_for_examine" 00:23:31.184 } 00:23:31.184 ] 00:23:31.184 }, 00:23:31.184 { 00:23:31.184 "subsystem": "nbd", 00:23:31.184 "config": [] 00:23:31.184 } 00:23:31.184 ] 00:23:31.184 }' 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4093685 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4093685 ']' 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4093685 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4093685 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4093685' 00:23:31.184 killing process with pid 4093685 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4093685 00:23:31.184 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.184 00:23:31.184 Latency(us) 00:23:31.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:31.184 =================================================================================================================== 00:23:31.184 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.184 [2024-07-15 20:29:09.604713] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:31.184 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4093685 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4093398 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4093398 ']' 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4093398 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4093398 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4093398' 00:23:31.444 killing process with pid 4093398 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4093398 00:23:31.444 [2024-07-15 20:29:09.856872] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:31.444 20:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4093398 00:23:31.703 20:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:31.703 20:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.703 20:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:31.703 "subsystems": [ 00:23:31.703 { 00:23:31.703 "subsystem": "keyring", 00:23:31.703 "config": [] 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "subsystem": "iobuf", 00:23:31.703 "config": [ 00:23:31.703 { 00:23:31.703 "method": "iobuf_set_options", 00:23:31.703 "params": { 00:23:31.703 "small_pool_count": 8192, 00:23:31.703 "large_pool_count": 1024, 00:23:31.703 "small_bufsize": 8192, 00:23:31.703 "large_bufsize": 135168 00:23:31.703 } 00:23:31.703 } 00:23:31.703 ] 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "subsystem": "sock", 00:23:31.703 "config": [ 00:23:31.703 { 00:23:31.703 "method": "sock_set_default_impl", 00:23:31.703 "params": { 00:23:31.703 "impl_name": "posix" 00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "sock_impl_set_options", 00:23:31.703 "params": { 00:23:31.703 "impl_name": "ssl", 00:23:31.703 "recv_buf_size": 4096, 00:23:31.703 "send_buf_size": 4096, 00:23:31.703 "enable_recv_pipe": true, 00:23:31.703 "enable_quickack": false, 00:23:31.703 "enable_placement_id": 0, 00:23:31.703 "enable_zerocopy_send_server": true, 00:23:31.703 "enable_zerocopy_send_client": false, 00:23:31.703 "zerocopy_threshold": 0, 00:23:31.703 "tls_version": 0, 00:23:31.703 "enable_ktls": false 00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "sock_impl_set_options", 00:23:31.703 "params": { 00:23:31.703 "impl_name": "posix", 00:23:31.703 "recv_buf_size": 2097152, 00:23:31.703 "send_buf_size": 2097152, 00:23:31.703 "enable_recv_pipe": true, 
00:23:31.703 "enable_quickack": false, 00:23:31.703 "enable_placement_id": 0, 00:23:31.703 "enable_zerocopy_send_server": true, 00:23:31.703 "enable_zerocopy_send_client": false, 00:23:31.703 "zerocopy_threshold": 0, 00:23:31.703 "tls_version": 0, 00:23:31.703 "enable_ktls": false 00:23:31.703 } 00:23:31.703 } 00:23:31.703 ] 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "subsystem": "vmd", 00:23:31.703 "config": [] 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "subsystem": "accel", 00:23:31.703 "config": [ 00:23:31.703 { 00:23:31.703 "method": "accel_set_options", 00:23:31.703 "params": { 00:23:31.703 "small_cache_size": 128, 00:23:31.703 "large_cache_size": 16, 00:23:31.703 "task_count": 2048, 00:23:31.703 "sequence_count": 2048, 00:23:31.703 "buf_count": 2048 00:23:31.703 } 00:23:31.703 } 00:23:31.703 ] 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "subsystem": "bdev", 00:23:31.703 "config": [ 00:23:31.703 { 00:23:31.703 "method": "bdev_set_options", 00:23:31.703 "params": { 00:23:31.703 "bdev_io_pool_size": 65535, 00:23:31.703 "bdev_io_cache_size": 256, 00:23:31.703 "bdev_auto_examine": true, 00:23:31.703 "iobuf_small_cache_size": 128, 00:23:31.703 "iobuf_large_cache_size": 16 00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "bdev_raid_set_options", 00:23:31.703 "params": { 00:23:31.703 "process_window_size_kb": 1024 00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "bdev_iscsi_set_options", 00:23:31.703 "params": { 00:23:31.703 "timeout_sec": 30 00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "bdev_nvme_set_options", 00:23:31.703 "params": { 00:23:31.703 "action_on_timeout": "none", 00:23:31.703 "timeout_us": 0, 00:23:31.703 "timeout_admin_us": 0, 00:23:31.703 "keep_alive_timeout_ms": 10000, 00:23:31.703 "arbitration_burst": 0, 00:23:31.703 "low_priority_weight": 0, 00:23:31.703 "medium_priority_weight": 0, 00:23:31.703 "high_priority_weight": 0, 00:23:31.703 "nvme_adminq_poll_period_us": 10000, 00:23:31.703 "nvme_ioq_poll_period_us": 0, 00:23:31.703 "io_queue_requests": 0, 00:23:31.703 "delay_cmd_submit": true, 00:23:31.703 "transport_retry_count": 4, 00:23:31.703 "bdev_retry_count": 3, 00:23:31.703 "transport_ack_timeout": 0, 00:23:31.703 "ctrlr_loss_timeout_sec": 0, 00:23:31.703 "reconnect_delay_sec": 0, 00:23:31.703 "fast_io_fail_timeout_sec": 0, 00:23:31.703 "disable_auto_failback": false, 00:23:31.703 "generate_uuids": false, 00:23:31.703 "transport_tos": 0, 00:23:31.703 "nvme_error_stat": false, 00:23:31.703 "rdma_srq_size": 0, 00:23:31.703 "io_path_stat": false, 00:23:31.703 "allow_accel_sequence": false, 00:23:31.703 "rdma_max_cq_size": 0, 00:23:31.703 "rdma_cm_event_timeout_ms": 0, 00:23:31.703 "dhchap_digests": [ 00:23:31.703 "sha256", 00:23:31.703 "sha384", 00:23:31.703 "sha512" 00:23:31.703 ], 00:23:31.703 "dhchap_dhgroups": [ 00:23:31.703 "null", 00:23:31.703 "ffdhe2048", 00:23:31.703 "ffdhe3072", 00:23:31.703 "ffdhe4096", 00:23:31.703 "ffdhe6144", 00:23:31.703 "ffdhe8192" 00:23:31.703 ] 00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "bdev_nvme_set_hotplug", 00:23:31.703 "params": { 00:23:31.703 "period_us": 100000, 00:23:31.703 "enable": false 00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "bdev_malloc_create", 00:23:31.703 "params": { 00:23:31.703 "name": "malloc0", 00:23:31.703 "num_blocks": 8192, 00:23:31.703 "block_size": 4096, 00:23:31.703 "physical_block_size": 4096, 00:23:31.703 "uuid": "66795f89-c1a3-4199-9134-8289f01897e5", 00:23:31.703 "optimal_io_boundary": 0 
00:23:31.703 } 00:23:31.703 }, 00:23:31.703 { 00:23:31.703 "method": "bdev_wait_for_examine" 00:23:31.703 } 00:23:31.703 ] 00:23:31.703 }, 00:23:31.704 { 00:23:31.704 "subsystem": "nbd", 00:23:31.704 "config": [] 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "subsystem": "scheduler", 00:23:31.704 "config": [ 00:23:31.704 { 00:23:31.704 "method": "framework_set_scheduler", 00:23:31.704 "params": { 00:23:31.704 "name": "static" 00:23:31.704 } 00:23:31.704 } 00:23:31.704 ] 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "subsystem": "nvmf", 00:23:31.704 "config": [ 00:23:31.704 { 00:23:31.704 "method": "nvmf_set_config", 00:23:31.704 "params": { 00:23:31.704 "discovery_filter": "match_any", 00:23:31.704 "admin_cmd_passthru": { 00:23:31.704 "identify_ctrlr": false 00:23:31.704 } 00:23:31.704 } 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "method": "nvmf_set_max_subsystems", 00:23:31.704 "params": { 00:23:31.704 "max_subsystems": 1024 00:23:31.704 } 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "method": "nvmf_set_crdt", 00:23:31.704 "params": { 00:23:31.704 "crdt1": 0, 00:23:31.704 "crdt2": 0, 00:23:31.704 "crdt3": 0 00:23:31.704 } 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "method": "nvmf_create_transport", 00:23:31.704 "params": { 00:23:31.704 "trtype": "TCP", 00:23:31.704 "max_queue_depth": 128, 00:23:31.704 "max_io_qpairs_per_ctrlr": 127, 00:23:31.704 "in_capsule_data_size": 4096, 00:23:31.704 "max_io_size": 131072, 00:23:31.704 "io_unit_size": 131072, 00:23:31.704 "max_aq_depth": 128, 00:23:31.704 "num_shared_buffers": 511, 00:23:31.704 "buf_cache_size": 4294967295, 00:23:31.704 "dif_insert_or_strip": false, 00:23:31.704 "zcopy": false, 00:23:31.704 "c2h_success": false, 00:23:31.704 "sock_priority": 0, 00:23:31.704 "abort_timeout_sec": 1, 00:23:31.704 "ack_timeout": 0, 00:23:31.704 "data_wr_pool_size": 0 00:23:31.704 } 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "method": "nvmf_create_subsystem", 00:23:31.704 "params": { 00:23:31.704 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.704 "allow_any_host": false, 00:23:31.704 "serial_number": "SPDK00000000000001", 00:23:31.704 "model_number": "SPDK bdev Controller", 00:23:31.704 "max_namespaces": 10, 00:23:31.704 "min_cntlid": 1, 00:23:31.704 "max_cntlid": 65519, 00:23:31.704 "ana_reporting": false 00:23:31.704 } 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "method": "nvmf_subsystem_add_host", 00:23:31.704 "params": { 00:23:31.704 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.704 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.704 "psk": "/tmp/tmp.XMWurxuAWE" 00:23:31.704 } 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "method": "nvmf_subsystem_add_ns", 00:23:31.704 "params": { 00:23:31.704 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.704 "namespace": { 00:23:31.704 "nsid": 1, 00:23:31.704 "bdev_name": "malloc0", 00:23:31.704 "nguid": "66795F89C1A3419991348289F01897E5", 00:23:31.704 "uuid": "66795f89-c1a3-4199-9134-8289f01897e5", 00:23:31.704 "no_auto_visible": false 00:23:31.704 } 00:23:31.704 } 00:23:31.704 }, 00:23:31.704 { 00:23:31.704 "method": "nvmf_subsystem_add_listener", 00:23:31.704 "params": { 00:23:31.704 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.704 "listen_address": { 00:23:31.704 "trtype": "TCP", 00:23:31.704 "adrfam": "IPv4", 00:23:31.704 "traddr": "10.0.0.2", 00:23:31.704 "trsvcid": "4420" 00:23:31.704 }, 00:23:31.704 "secure_channel": true 00:23:31.704 } 00:23:31.704 } 00:23:31.704 ] 00:23:31.704 } 00:23:31.704 ] 00:23:31.704 }' 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:31.704 
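At this point the test stops driving the target with individual RPCs and instead replays the configuration it captured with save_config: the JSON echoed above is handed to a fresh nvmf_tgt on /dev/fd/62 (the -c /dev/fd/62 argument in the nvmfappstart line). Roughly, with tgt.json as a hypothetical file name and nvmf_tgt standing for the full build/bin path from the trace:

  # Capture the live target configuration over the default RPC socket ...
  rpc.py save_config > tgt.json
  # ... and boot a new target straight from that JSON; the test script feeds it in on
  # /dev/fd/62, and process substitution below does the same thing (the fd number may differ)
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(cat tgt.json)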
20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4093843 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4093843 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4093843 ']' 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.704 20:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.704 [2024-07-15 20:29:10.168360] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:23:31.704 [2024-07-15 20:29:10.168450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.704 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.962 [2024-07-15 20:29:10.237815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.962 [2024-07-15 20:29:10.329831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.962 [2024-07-15 20:29:10.329904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.962 [2024-07-15 20:29:10.329939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.962 [2024-07-15 20:29:10.329951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.962 [2024-07-15 20:29:10.329962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:31.962 [2024-07-15 20:29:10.330042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.220 [2024-07-15 20:29:10.562218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.220 [2024-07-15 20:29:10.578174] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:32.220 [2024-07-15 20:29:10.594237] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.220 [2024-07-15 20:29:10.609057] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4093995 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4093995 /var/tmp/bdevperf.sock 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4093995 ']' 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:32.784 20:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:32.784 "subsystems": [ 00:23:32.784 { 00:23:32.784 "subsystem": "keyring", 00:23:32.784 "config": [] 00:23:32.784 }, 00:23:32.785 { 00:23:32.785 "subsystem": "iobuf", 00:23:32.785 "config": [ 00:23:32.785 { 00:23:32.785 "method": "iobuf_set_options", 00:23:32.785 "params": { 00:23:32.785 "small_pool_count": 8192, 00:23:32.785 "large_pool_count": 1024, 00:23:32.785 "small_bufsize": 8192, 00:23:32.785 "large_bufsize": 135168 00:23:32.785 } 00:23:32.785 } 00:23:32.785 ] 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "subsystem": "sock", 00:23:32.785 "config": [ 00:23:32.785 { 00:23:32.785 "method": "sock_set_default_impl", 00:23:32.785 "params": { 00:23:32.785 "impl_name": "posix" 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "sock_impl_set_options", 00:23:32.785 "params": { 00:23:32.785 "impl_name": "ssl", 00:23:32.785 "recv_buf_size": 4096, 00:23:32.785 "send_buf_size": 4096, 00:23:32.785 "enable_recv_pipe": true, 00:23:32.785 "enable_quickack": false, 00:23:32.785 "enable_placement_id": 0, 00:23:32.785 "enable_zerocopy_send_server": true, 00:23:32.785 "enable_zerocopy_send_client": false, 00:23:32.785 "zerocopy_threshold": 0, 00:23:32.785 "tls_version": 0, 00:23:32.785 "enable_ktls": false 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "sock_impl_set_options", 00:23:32.785 "params": { 00:23:32.785 "impl_name": "posix", 00:23:32.785 "recv_buf_size": 2097152, 00:23:32.785 "send_buf_size": 2097152, 00:23:32.785 "enable_recv_pipe": true, 00:23:32.785 "enable_quickack": false, 00:23:32.785 "enable_placement_id": 0, 00:23:32.785 "enable_zerocopy_send_server": true, 00:23:32.785 "enable_zerocopy_send_client": false, 00:23:32.785 "zerocopy_threshold": 0, 00:23:32.785 "tls_version": 0, 00:23:32.785 "enable_ktls": false 00:23:32.785 } 00:23:32.785 } 00:23:32.785 ] 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "subsystem": "vmd", 00:23:32.785 "config": [] 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "subsystem": "accel", 00:23:32.785 "config": [ 00:23:32.785 { 00:23:32.785 "method": "accel_set_options", 00:23:32.785 "params": { 00:23:32.785 "small_cache_size": 128, 00:23:32.785 "large_cache_size": 16, 00:23:32.785 "task_count": 2048, 00:23:32.785 "sequence_count": 2048, 00:23:32.785 "buf_count": 2048 00:23:32.785 } 00:23:32.785 } 00:23:32.785 ] 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "subsystem": "bdev", 00:23:32.785 "config": [ 00:23:32.785 { 00:23:32.785 "method": "bdev_set_options", 00:23:32.785 "params": { 00:23:32.785 "bdev_io_pool_size": 65535, 00:23:32.785 "bdev_io_cache_size": 256, 00:23:32.785 "bdev_auto_examine": true, 00:23:32.785 "iobuf_small_cache_size": 128, 00:23:32.785 "iobuf_large_cache_size": 16 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "bdev_raid_set_options", 00:23:32.785 "params": { 00:23:32.785 "process_window_size_kb": 1024 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "bdev_iscsi_set_options", 00:23:32.785 "params": { 00:23:32.785 "timeout_sec": 30 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "bdev_nvme_set_options", 00:23:32.785 "params": { 00:23:32.785 "action_on_timeout": "none", 00:23:32.785 "timeout_us": 0, 00:23:32.785 "timeout_admin_us": 0, 00:23:32.785 "keep_alive_timeout_ms": 10000, 00:23:32.785 "arbitration_burst": 0, 00:23:32.785 "low_priority_weight": 0, 00:23:32.785 "medium_priority_weight": 0, 00:23:32.785 "high_priority_weight": 0, 00:23:32.785 
"nvme_adminq_poll_period_us": 10000, 00:23:32.785 "nvme_ioq_poll_period_us": 0, 00:23:32.785 "io_queue_requests": 512, 00:23:32.785 "delay_cmd_submit": true, 00:23:32.785 "transport_retry_count": 4, 00:23:32.785 "bdev_retry_count": 3, 00:23:32.785 "transport_ack_timeout": 0, 00:23:32.785 "ctrlr_loss_timeout_sec": 0, 00:23:32.785 "reconnect_delay_sec": 0, 00:23:32.785 "fast_io_fail_timeout_sec": 0, 00:23:32.785 "disable_auto_failback": false, 00:23:32.785 "generate_uuids": false, 00:23:32.785 "transport_tos": 0, 00:23:32.785 "nvme_error_stat": false, 00:23:32.785 "rdma_srq_size": 0, 00:23:32.785 "io_path_stat": false, 00:23:32.785 "allow_accel_sequence": false, 00:23:32.785 "rdma_max_cq_size": 0, 00:23:32.785 "rdma_cm_event_timeout_ms": 0, 00:23:32.785 "dhchap_digests": [ 00:23:32.785 "sha256", 00:23:32.785 "sha384", 00:23:32.785 "sha512" 00:23:32.785 ], 00:23:32.785 "dhchap_dhgroups": [ 00:23:32.785 "null", 00:23:32.785 "ffdhe2048", 00:23:32.785 "ffdhe3072", 00:23:32.785 "ffdhe4096", 00:23:32.785 "ffdhe6144", 00:23:32.785 "ffdhe8192" 00:23:32.785 ] 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "bdev_nvme_attach_controller", 00:23:32.785 "params": { 00:23:32.785 "name": "TLSTEST", 00:23:32.785 "trtype": "TCP", 00:23:32.785 "adrfam": "IPv4", 00:23:32.785 "traddr": "10.0.0.2", 00:23:32.785 "trsvcid": "4420", 00:23:32.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.785 "prchk_reftag": false, 00:23:32.785 "prchk_guard": false, 00:23:32.785 "ctrlr_loss_timeout_sec": 0, 00:23:32.785 "reconnect_delay_sec": 0, 00:23:32.785 "fast_io_fail_timeout_sec": 0, 00:23:32.785 "psk": "/tmp/tmp.XMWurxuAWE", 00:23:32.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.785 "hdgst": false, 00:23:32.785 "ddgst": false 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "bdev_nvme_set_hotplug", 00:23:32.785 "params": { 00:23:32.785 "period_us": 100000, 00:23:32.785 "enable": false 00:23:32.785 } 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "method": "bdev_wait_for_examine" 00:23:32.785 } 00:23:32.785 ] 00:23:32.785 }, 00:23:32.785 { 00:23:32.785 "subsystem": "nbd", 00:23:32.785 "config": [] 00:23:32.785 } 00:23:32.785 ] 00:23:32.785 }' 00:23:32.785 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.785 20:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.785 [2024-07-15 20:29:11.246955] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:23:32.785 [2024-07-15 20:29:11.247035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093995 ] 00:23:32.785 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.785 [2024-07-15 20:29:11.304513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.049 [2024-07-15 20:29:11.389535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.049 [2024-07-15 20:29:11.550451] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.049 [2024-07-15 20:29:11.550579] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:33.983 20:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.983 20:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:33.983 20:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:33.983 Running I/O for 10 seconds... 00:23:43.958 00:23:43.958 Latency(us) 00:23:43.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.958 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:43.958 Verification LBA range: start 0x0 length 0x2000 00:23:43.958 TLSTESTn1 : 10.06 2015.82 7.87 0.00 0.00 63310.17 11505.21 100973.99 00:23:43.958 =================================================================================================================== 00:23:43.958 Total : 2015.82 7.87 0.00 0.00 63310.17 11505.21 100973.99 00:23:43.958 0 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4093995 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4093995 ']' 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4093995 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4093995 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4093995' 00:23:43.958 killing process with pid 4093995 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4093995 00:23:43.958 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.958 00:23:43.958 Latency(us) 00:23:43.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.958 =================================================================================================================== 00:23:43.958 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.958 [2024-07-15 20:29:22.451280] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:43.958 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4093995 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4093843 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4093843 ']' 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4093843 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4093843 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4093843' 00:23:44.216 killing process with pid 4093843 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4093843 00:23:44.216 [2024-07-15 20:29:22.705667] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:44.216 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4093843 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4095442 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4095442 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4095442 ']' 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.475 20:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.475 [2024-07-15 20:29:22.996628] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:23:44.475 [2024-07-15 20:29:22.996714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.733 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.733 [2024-07-15 20:29:23.066698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.733 [2024-07-15 20:29:23.154391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.733 [2024-07-15 20:29:23.154456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.733 [2024-07-15 20:29:23.154474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.733 [2024-07-15 20:29:23.154488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.733 [2024-07-15 20:29:23.154499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.733 [2024-07-15 20:29:23.154529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.XMWurxuAWE 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XMWurxuAWE 00:23:44.991 20:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:45.248 [2024-07-15 20:29:23.524323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.248 20:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:45.507 20:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:45.764 [2024-07-15 20:29:24.101922] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.764 [2024-07-15 20:29:24.102160] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.764 20:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:46.022 malloc0 00:23:46.022 20:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:46.280 20:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.XMWurxuAWE 00:23:46.538 [2024-07-15 20:29:24.984174] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4095609 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4095609 /var/tmp/bdevperf.sock 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4095609 ']' 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.538 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.538 [2024-07-15 20:29:25.050119] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:23:46.538 [2024-07-15 20:29:25.050260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4095609 ] 00:23:46.796 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.796 [2024-07-15 20:29:25.121325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.796 [2024-07-15 20:29:25.215075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.796 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.796 20:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:46.796 20:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XMWurxuAWE 00:23:47.056 20:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:47.316 [2024-07-15 20:29:25.784563] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:47.574 nvme0n1 00:23:47.574 20:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:47.574 Running I/O for 1 seconds... 
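In this iteration the initiator side switches to the keyring flow: the PSK file is first registered with the bdevperf process as a named key, and the controller is then attached by key name rather than by file path, which avoids the spdk_nvme_ctrlr_opts.psk deprecation warning seen during the previous attach. The two RPCs issued above are, in short (rpc.py path shortened):

  # register the PSK file under the name key0, then attach the TLS-protected controller using that key
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XMWurxuAWE
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1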
00:23:48.507 00:23:48.507 Latency(us) 00:23:48.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.507 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:48.507 Verification LBA range: start 0x0 length 0x2000 00:23:48.507 nvme0n1 : 1.06 1890.75 7.39 0.00 0.00 66235.20 6359.42 102527.43 00:23:48.507 =================================================================================================================== 00:23:48.507 Total : 1890.75 7.39 0.00 0.00 66235.20 6359.42 102527.43 00:23:48.507 0 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4095609 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4095609 ']' 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4095609 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4095609 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4095609' 00:23:48.765 killing process with pid 4095609 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4095609 00:23:48.765 Received shutdown signal, test time was about 1.000000 seconds 00:23:48.765 00:23:48.765 Latency(us) 00:23:48.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.765 =================================================================================================================== 00:23:48.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.765 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4095609 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4095442 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4095442 ']' 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4095442 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4095442 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4095442' 00:23:49.023 killing process with pid 4095442 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4095442 00:23:49.023 [2024-07-15 20:29:27.337970] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.023 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4095442 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.281 
20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4096003 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4096003 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096003 ']' 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.281 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.281 [2024-07-15 20:29:27.610290] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:23:49.281 [2024-07-15 20:29:27.610372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.281 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.281 [2024-07-15 20:29:27.674920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.281 [2024-07-15 20:29:27.761968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.281 [2024-07-15 20:29:27.762025] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.281 [2024-07-15 20:29:27.762045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.281 [2024-07-15 20:29:27.762057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.281 [2024-07-15 20:29:27.762067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:49.282 [2024-07-15 20:29:27.762096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.539 [2024-07-15 20:29:27.910595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.539 malloc0 00:23:49.539 [2024-07-15 20:29:27.943262] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.539 [2024-07-15 20:29:27.943536] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=4096032 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 4096032 /var/tmp/bdevperf.sock 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096032 ']' 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.539 20:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.539 [2024-07-15 20:29:28.013272] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:23:49.539 [2024-07-15 20:29:28.013337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096032 ] 00:23:49.539 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.797 [2024-07-15 20:29:28.074377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.797 [2024-07-15 20:29:28.165133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.797 20:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.797 20:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:49.797 20:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XMWurxuAWE 00:23:50.060 20:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:50.365 [2024-07-15 20:29:28.813947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.622 nvme0n1 00:23:50.622 20:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.622 Running I/O for 1 seconds... 00:23:51.554 00:23:51.554 Latency(us) 00:23:51.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.554 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:51.554 Verification LBA range: start 0x0 length 0x2000 00:23:51.554 nvme0n1 : 1.06 1875.50 7.33 0.00 0.00 66690.28 6189.51 95536.92 00:23:51.554 =================================================================================================================== 00:23:51.554 Total : 1875.50 7.33 0.00 0.00 66690.28 6189.51 95536.92 00:23:51.554 0 00:23:51.811 20:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:51.811 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.811 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.811 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.811 20:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:51.811 "subsystems": [ 00:23:51.811 { 00:23:51.811 "subsystem": "keyring", 00:23:51.811 "config": [ 00:23:51.811 { 00:23:51.811 "method": "keyring_file_add_key", 00:23:51.811 "params": { 00:23:51.811 "name": "key0", 00:23:51.811 "path": "/tmp/tmp.XMWurxuAWE" 00:23:51.811 } 00:23:51.811 } 00:23:51.811 ] 00:23:51.811 }, 00:23:51.811 { 00:23:51.811 "subsystem": "iobuf", 00:23:51.811 "config": [ 00:23:51.811 { 00:23:51.811 "method": "iobuf_set_options", 00:23:51.811 "params": { 00:23:51.811 "small_pool_count": 8192, 00:23:51.811 "large_pool_count": 1024, 00:23:51.811 "small_bufsize": 8192, 00:23:51.811 "large_bufsize": 135168 00:23:51.811 } 00:23:51.811 } 00:23:51.811 ] 00:23:51.811 }, 00:23:51.811 { 00:23:51.811 "subsystem": "sock", 00:23:51.811 "config": [ 00:23:51.811 { 00:23:51.811 "method": "sock_set_default_impl", 00:23:51.811 "params": { 00:23:51.811 "impl_name": "posix" 00:23:51.811 } 
00:23:51.811 }, 00:23:51.811 { 00:23:51.811 "method": "sock_impl_set_options", 00:23:51.811 "params": { 00:23:51.811 "impl_name": "ssl", 00:23:51.811 "recv_buf_size": 4096, 00:23:51.811 "send_buf_size": 4096, 00:23:51.811 "enable_recv_pipe": true, 00:23:51.811 "enable_quickack": false, 00:23:51.811 "enable_placement_id": 0, 00:23:51.811 "enable_zerocopy_send_server": true, 00:23:51.811 "enable_zerocopy_send_client": false, 00:23:51.811 "zerocopy_threshold": 0, 00:23:51.811 "tls_version": 0, 00:23:51.811 "enable_ktls": false 00:23:51.811 } 00:23:51.811 }, 00:23:51.811 { 00:23:51.811 "method": "sock_impl_set_options", 00:23:51.811 "params": { 00:23:51.811 "impl_name": "posix", 00:23:51.811 "recv_buf_size": 2097152, 00:23:51.811 "send_buf_size": 2097152, 00:23:51.811 "enable_recv_pipe": true, 00:23:51.811 "enable_quickack": false, 00:23:51.811 "enable_placement_id": 0, 00:23:51.811 "enable_zerocopy_send_server": true, 00:23:51.811 "enable_zerocopy_send_client": false, 00:23:51.811 "zerocopy_threshold": 0, 00:23:51.811 "tls_version": 0, 00:23:51.811 "enable_ktls": false 00:23:51.811 } 00:23:51.811 } 00:23:51.811 ] 00:23:51.811 }, 00:23:51.811 { 00:23:51.811 "subsystem": "vmd", 00:23:51.811 "config": [] 00:23:51.811 }, 00:23:51.811 { 00:23:51.811 "subsystem": "accel", 00:23:51.811 "config": [ 00:23:51.811 { 00:23:51.811 "method": "accel_set_options", 00:23:51.811 "params": { 00:23:51.811 "small_cache_size": 128, 00:23:51.811 "large_cache_size": 16, 00:23:51.812 "task_count": 2048, 00:23:51.812 "sequence_count": 2048, 00:23:51.812 "buf_count": 2048 00:23:51.812 } 00:23:51.812 } 00:23:51.812 ] 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "subsystem": "bdev", 00:23:51.812 "config": [ 00:23:51.812 { 00:23:51.812 "method": "bdev_set_options", 00:23:51.812 "params": { 00:23:51.812 "bdev_io_pool_size": 65535, 00:23:51.812 "bdev_io_cache_size": 256, 00:23:51.812 "bdev_auto_examine": true, 00:23:51.812 "iobuf_small_cache_size": 128, 00:23:51.812 "iobuf_large_cache_size": 16 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "bdev_raid_set_options", 00:23:51.812 "params": { 00:23:51.812 "process_window_size_kb": 1024 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "bdev_iscsi_set_options", 00:23:51.812 "params": { 00:23:51.812 "timeout_sec": 30 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "bdev_nvme_set_options", 00:23:51.812 "params": { 00:23:51.812 "action_on_timeout": "none", 00:23:51.812 "timeout_us": 0, 00:23:51.812 "timeout_admin_us": 0, 00:23:51.812 "keep_alive_timeout_ms": 10000, 00:23:51.812 "arbitration_burst": 0, 00:23:51.812 "low_priority_weight": 0, 00:23:51.812 "medium_priority_weight": 0, 00:23:51.812 "high_priority_weight": 0, 00:23:51.812 "nvme_adminq_poll_period_us": 10000, 00:23:51.812 "nvme_ioq_poll_period_us": 0, 00:23:51.812 "io_queue_requests": 0, 00:23:51.812 "delay_cmd_submit": true, 00:23:51.812 "transport_retry_count": 4, 00:23:51.812 "bdev_retry_count": 3, 00:23:51.812 "transport_ack_timeout": 0, 00:23:51.812 "ctrlr_loss_timeout_sec": 0, 00:23:51.812 "reconnect_delay_sec": 0, 00:23:51.812 "fast_io_fail_timeout_sec": 0, 00:23:51.812 "disable_auto_failback": false, 00:23:51.812 "generate_uuids": false, 00:23:51.812 "transport_tos": 0, 00:23:51.812 "nvme_error_stat": false, 00:23:51.812 "rdma_srq_size": 0, 00:23:51.812 "io_path_stat": false, 00:23:51.812 "allow_accel_sequence": false, 00:23:51.812 "rdma_max_cq_size": 0, 00:23:51.812 "rdma_cm_event_timeout_ms": 0, 00:23:51.812 "dhchap_digests": [ 00:23:51.812 "sha256", 
00:23:51.812 "sha384", 00:23:51.812 "sha512" 00:23:51.812 ], 00:23:51.812 "dhchap_dhgroups": [ 00:23:51.812 "null", 00:23:51.812 "ffdhe2048", 00:23:51.812 "ffdhe3072", 00:23:51.812 "ffdhe4096", 00:23:51.812 "ffdhe6144", 00:23:51.812 "ffdhe8192" 00:23:51.812 ] 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "bdev_nvme_set_hotplug", 00:23:51.812 "params": { 00:23:51.812 "period_us": 100000, 00:23:51.812 "enable": false 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "bdev_malloc_create", 00:23:51.812 "params": { 00:23:51.812 "name": "malloc0", 00:23:51.812 "num_blocks": 8192, 00:23:51.812 "block_size": 4096, 00:23:51.812 "physical_block_size": 4096, 00:23:51.812 "uuid": "a72d1472-59bb-4520-af06-3ff3f5d0338d", 00:23:51.812 "optimal_io_boundary": 0 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "bdev_wait_for_examine" 00:23:51.812 } 00:23:51.812 ] 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "subsystem": "nbd", 00:23:51.812 "config": [] 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "subsystem": "scheduler", 00:23:51.812 "config": [ 00:23:51.812 { 00:23:51.812 "method": "framework_set_scheduler", 00:23:51.812 "params": { 00:23:51.812 "name": "static" 00:23:51.812 } 00:23:51.812 } 00:23:51.812 ] 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "subsystem": "nvmf", 00:23:51.812 "config": [ 00:23:51.812 { 00:23:51.812 "method": "nvmf_set_config", 00:23:51.812 "params": { 00:23:51.812 "discovery_filter": "match_any", 00:23:51.812 "admin_cmd_passthru": { 00:23:51.812 "identify_ctrlr": false 00:23:51.812 } 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "nvmf_set_max_subsystems", 00:23:51.812 "params": { 00:23:51.812 "max_subsystems": 1024 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "nvmf_set_crdt", 00:23:51.812 "params": { 00:23:51.812 "crdt1": 0, 00:23:51.812 "crdt2": 0, 00:23:51.812 "crdt3": 0 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "nvmf_create_transport", 00:23:51.812 "params": { 00:23:51.812 "trtype": "TCP", 00:23:51.812 "max_queue_depth": 128, 00:23:51.812 "max_io_qpairs_per_ctrlr": 127, 00:23:51.812 "in_capsule_data_size": 4096, 00:23:51.812 "max_io_size": 131072, 00:23:51.812 "io_unit_size": 131072, 00:23:51.812 "max_aq_depth": 128, 00:23:51.812 "num_shared_buffers": 511, 00:23:51.812 "buf_cache_size": 4294967295, 00:23:51.812 "dif_insert_or_strip": false, 00:23:51.812 "zcopy": false, 00:23:51.812 "c2h_success": false, 00:23:51.812 "sock_priority": 0, 00:23:51.812 "abort_timeout_sec": 1, 00:23:51.812 "ack_timeout": 0, 00:23:51.812 "data_wr_pool_size": 0 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "nvmf_create_subsystem", 00:23:51.812 "params": { 00:23:51.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.812 "allow_any_host": false, 00:23:51.812 "serial_number": "00000000000000000000", 00:23:51.812 "model_number": "SPDK bdev Controller", 00:23:51.812 "max_namespaces": 32, 00:23:51.812 "min_cntlid": 1, 00:23:51.812 "max_cntlid": 65519, 00:23:51.812 "ana_reporting": false 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "nvmf_subsystem_add_host", 00:23:51.812 "params": { 00:23:51.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.812 "host": "nqn.2016-06.io.spdk:host1", 00:23:51.812 "psk": "key0" 00:23:51.812 } 00:23:51.812 }, 00:23:51.812 { 00:23:51.812 "method": "nvmf_subsystem_add_ns", 00:23:51.812 "params": { 00:23:51.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.812 "namespace": { 00:23:51.812 "nsid": 1, 
00:23:51.812 "bdev_name": "malloc0", 00:23:51.812 "nguid": "A72D147259BB4520AF063FF3F5D0338D", 00:23:51.812 "uuid": "a72d1472-59bb-4520-af06-3ff3f5d0338d", 00:23:51.812 "no_auto_visible": false 00:23:51.812 } 00:23:51.813 } 00:23:51.813 }, 00:23:51.813 { 00:23:51.813 "method": "nvmf_subsystem_add_listener", 00:23:51.813 "params": { 00:23:51.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.813 "listen_address": { 00:23:51.813 "trtype": "TCP", 00:23:51.813 "adrfam": "IPv4", 00:23:51.813 "traddr": "10.0.0.2", 00:23:51.813 "trsvcid": "4420" 00:23:51.813 }, 00:23:51.813 "secure_channel": false, 00:23:51.813 "sock_impl": "ssl" 00:23:51.813 } 00:23:51.813 } 00:23:51.813 ] 00:23:51.813 } 00:23:51.813 ] 00:23:51.813 }' 00:23:51.813 20:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:52.071 20:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:52.071 "subsystems": [ 00:23:52.071 { 00:23:52.071 "subsystem": "keyring", 00:23:52.071 "config": [ 00:23:52.071 { 00:23:52.071 "method": "keyring_file_add_key", 00:23:52.071 "params": { 00:23:52.071 "name": "key0", 00:23:52.071 "path": "/tmp/tmp.XMWurxuAWE" 00:23:52.071 } 00:23:52.071 } 00:23:52.071 ] 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "subsystem": "iobuf", 00:23:52.071 "config": [ 00:23:52.071 { 00:23:52.071 "method": "iobuf_set_options", 00:23:52.071 "params": { 00:23:52.071 "small_pool_count": 8192, 00:23:52.071 "large_pool_count": 1024, 00:23:52.071 "small_bufsize": 8192, 00:23:52.071 "large_bufsize": 135168 00:23:52.071 } 00:23:52.071 } 00:23:52.071 ] 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "subsystem": "sock", 00:23:52.071 "config": [ 00:23:52.071 { 00:23:52.071 "method": "sock_set_default_impl", 00:23:52.071 "params": { 00:23:52.071 "impl_name": "posix" 00:23:52.071 } 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "method": "sock_impl_set_options", 00:23:52.071 "params": { 00:23:52.071 "impl_name": "ssl", 00:23:52.071 "recv_buf_size": 4096, 00:23:52.071 "send_buf_size": 4096, 00:23:52.071 "enable_recv_pipe": true, 00:23:52.071 "enable_quickack": false, 00:23:52.071 "enable_placement_id": 0, 00:23:52.071 "enable_zerocopy_send_server": true, 00:23:52.071 "enable_zerocopy_send_client": false, 00:23:52.071 "zerocopy_threshold": 0, 00:23:52.071 "tls_version": 0, 00:23:52.071 "enable_ktls": false 00:23:52.071 } 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "method": "sock_impl_set_options", 00:23:52.071 "params": { 00:23:52.071 "impl_name": "posix", 00:23:52.071 "recv_buf_size": 2097152, 00:23:52.071 "send_buf_size": 2097152, 00:23:52.071 "enable_recv_pipe": true, 00:23:52.071 "enable_quickack": false, 00:23:52.071 "enable_placement_id": 0, 00:23:52.071 "enable_zerocopy_send_server": true, 00:23:52.071 "enable_zerocopy_send_client": false, 00:23:52.071 "zerocopy_threshold": 0, 00:23:52.071 "tls_version": 0, 00:23:52.071 "enable_ktls": false 00:23:52.071 } 00:23:52.071 } 00:23:52.071 ] 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "subsystem": "vmd", 00:23:52.071 "config": [] 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "subsystem": "accel", 00:23:52.071 "config": [ 00:23:52.071 { 00:23:52.071 "method": "accel_set_options", 00:23:52.071 "params": { 00:23:52.071 "small_cache_size": 128, 00:23:52.071 "large_cache_size": 16, 00:23:52.071 "task_count": 2048, 00:23:52.071 "sequence_count": 2048, 00:23:52.071 "buf_count": 2048 00:23:52.071 } 00:23:52.071 } 00:23:52.071 ] 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "subsystem": "bdev", 
00:23:52.071 "config": [ 00:23:52.071 { 00:23:52.071 "method": "bdev_set_options", 00:23:52.071 "params": { 00:23:52.071 "bdev_io_pool_size": 65535, 00:23:52.071 "bdev_io_cache_size": 256, 00:23:52.071 "bdev_auto_examine": true, 00:23:52.071 "iobuf_small_cache_size": 128, 00:23:52.071 "iobuf_large_cache_size": 16 00:23:52.071 } 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "method": "bdev_raid_set_options", 00:23:52.071 "params": { 00:23:52.071 "process_window_size_kb": 1024 00:23:52.071 } 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "method": "bdev_iscsi_set_options", 00:23:52.071 "params": { 00:23:52.071 "timeout_sec": 30 00:23:52.071 } 00:23:52.071 }, 00:23:52.071 { 00:23:52.071 "method": "bdev_nvme_set_options", 00:23:52.071 "params": { 00:23:52.071 "action_on_timeout": "none", 00:23:52.071 "timeout_us": 0, 00:23:52.071 "timeout_admin_us": 0, 00:23:52.071 "keep_alive_timeout_ms": 10000, 00:23:52.071 "arbitration_burst": 0, 00:23:52.071 "low_priority_weight": 0, 00:23:52.071 "medium_priority_weight": 0, 00:23:52.071 "high_priority_weight": 0, 00:23:52.071 "nvme_adminq_poll_period_us": 10000, 00:23:52.071 "nvme_ioq_poll_period_us": 0, 00:23:52.071 "io_queue_requests": 512, 00:23:52.072 "delay_cmd_submit": true, 00:23:52.072 "transport_retry_count": 4, 00:23:52.072 "bdev_retry_count": 3, 00:23:52.072 "transport_ack_timeout": 0, 00:23:52.072 "ctrlr_loss_timeout_sec": 0, 00:23:52.072 "reconnect_delay_sec": 0, 00:23:52.072 "fast_io_fail_timeout_sec": 0, 00:23:52.072 "disable_auto_failback": false, 00:23:52.072 "generate_uuids": false, 00:23:52.072 "transport_tos": 0, 00:23:52.072 "nvme_error_stat": false, 00:23:52.072 "rdma_srq_size": 0, 00:23:52.072 "io_path_stat": false, 00:23:52.072 "allow_accel_sequence": false, 00:23:52.072 "rdma_max_cq_size": 0, 00:23:52.072 "rdma_cm_event_timeout_ms": 0, 00:23:52.072 "dhchap_digests": [ 00:23:52.072 "sha256", 00:23:52.072 "sha384", 00:23:52.072 "sha512" 00:23:52.072 ], 00:23:52.072 "dhchap_dhgroups": [ 00:23:52.072 "null", 00:23:52.072 "ffdhe2048", 00:23:52.072 "ffdhe3072", 00:23:52.072 "ffdhe4096", 00:23:52.072 "ffdhe6144", 00:23:52.072 "ffdhe8192" 00:23:52.072 ] 00:23:52.072 } 00:23:52.072 }, 00:23:52.072 { 00:23:52.072 "method": "bdev_nvme_attach_controller", 00:23:52.072 "params": { 00:23:52.072 "name": "nvme0", 00:23:52.072 "trtype": "TCP", 00:23:52.072 "adrfam": "IPv4", 00:23:52.072 "traddr": "10.0.0.2", 00:23:52.072 "trsvcid": "4420", 00:23:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.072 "prchk_reftag": false, 00:23:52.072 "prchk_guard": false, 00:23:52.072 "ctrlr_loss_timeout_sec": 0, 00:23:52.072 "reconnect_delay_sec": 0, 00:23:52.072 "fast_io_fail_timeout_sec": 0, 00:23:52.072 "psk": "key0", 00:23:52.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.072 "hdgst": false, 00:23:52.072 "ddgst": false 00:23:52.072 } 00:23:52.072 }, 00:23:52.072 { 00:23:52.072 "method": "bdev_nvme_set_hotplug", 00:23:52.072 "params": { 00:23:52.072 "period_us": 100000, 00:23:52.072 "enable": false 00:23:52.072 } 00:23:52.072 }, 00:23:52.072 { 00:23:52.072 "method": "bdev_enable_histogram", 00:23:52.072 "params": { 00:23:52.072 "name": "nvme0n1", 00:23:52.072 "enable": true 00:23:52.072 } 00:23:52.072 }, 00:23:52.072 { 00:23:52.072 "method": "bdev_wait_for_examine" 00:23:52.072 } 00:23:52.072 ] 00:23:52.072 }, 00:23:52.072 { 00:23:52.072 "subsystem": "nbd", 00:23:52.072 "config": [] 00:23:52.072 } 00:23:52.072 ] 00:23:52.072 }' 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 4096032 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 4096032 ']' 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4096032 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096032 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096032' 00:23:52.072 killing process with pid 4096032 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096032 00:23:52.072 Received shutdown signal, test time was about 1.000000 seconds 00:23:52.072 00:23:52.072 Latency(us) 00:23:52.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.072 =================================================================================================================== 00:23:52.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.072 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096032 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 4096003 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4096003 ']' 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4096003 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096003 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096003' 00:23:52.331 killing process with pid 4096003 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096003 00:23:52.331 20:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096003 00:23:52.590 20:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:52.590 20:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.590 20:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:52.590 "subsystems": [ 00:23:52.590 { 00:23:52.590 "subsystem": "keyring", 00:23:52.590 "config": [ 00:23:52.590 { 00:23:52.590 "method": "keyring_file_add_key", 00:23:52.590 "params": { 00:23:52.590 "name": "key0", 00:23:52.590 "path": "/tmp/tmp.XMWurxuAWE" 00:23:52.590 } 00:23:52.590 } 00:23:52.590 ] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "iobuf", 00:23:52.590 "config": [ 00:23:52.590 { 00:23:52.590 "method": "iobuf_set_options", 00:23:52.590 "params": { 00:23:52.590 "small_pool_count": 8192, 00:23:52.590 "large_pool_count": 1024, 00:23:52.590 "small_bufsize": 8192, 00:23:52.590 "large_bufsize": 135168 00:23:52.590 } 00:23:52.590 } 00:23:52.590 ] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "sock", 00:23:52.590 "config": [ 00:23:52.590 { 
00:23:52.590 "method": "sock_set_default_impl", 00:23:52.590 "params": { 00:23:52.590 "impl_name": "posix" 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "sock_impl_set_options", 00:23:52.590 "params": { 00:23:52.590 "impl_name": "ssl", 00:23:52.590 "recv_buf_size": 4096, 00:23:52.590 "send_buf_size": 4096, 00:23:52.590 "enable_recv_pipe": true, 00:23:52.590 "enable_quickack": false, 00:23:52.590 "enable_placement_id": 0, 00:23:52.590 "enable_zerocopy_send_server": true, 00:23:52.590 "enable_zerocopy_send_client": false, 00:23:52.590 "zerocopy_threshold": 0, 00:23:52.590 "tls_version": 0, 00:23:52.590 "enable_ktls": false 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "sock_impl_set_options", 00:23:52.590 "params": { 00:23:52.590 "impl_name": "posix", 00:23:52.590 "recv_buf_size": 2097152, 00:23:52.590 "send_buf_size": 2097152, 00:23:52.590 "enable_recv_pipe": true, 00:23:52.590 "enable_quickack": false, 00:23:52.590 "enable_placement_id": 0, 00:23:52.590 "enable_zerocopy_send_server": true, 00:23:52.590 "enable_zerocopy_send_client": false, 00:23:52.590 "zerocopy_threshold": 0, 00:23:52.590 "tls_version": 0, 00:23:52.590 "enable_ktls": false 00:23:52.590 } 00:23:52.590 } 00:23:52.590 ] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "vmd", 00:23:52.590 "config": [] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "accel", 00:23:52.590 "config": [ 00:23:52.590 { 00:23:52.590 "method": "accel_set_options", 00:23:52.590 "params": { 00:23:52.590 "small_cache_size": 128, 00:23:52.590 "large_cache_size": 16, 00:23:52.590 "task_count": 2048, 00:23:52.590 "sequence_count": 2048, 00:23:52.590 "buf_count": 2048 00:23:52.590 } 00:23:52.590 } 00:23:52.590 ] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "bdev", 00:23:52.590 "config": [ 00:23:52.590 { 00:23:52.590 "method": "bdev_set_options", 00:23:52.590 "params": { 00:23:52.590 "bdev_io_pool_size": 65535, 00:23:52.590 "bdev_io_cache_size": 256, 00:23:52.590 "bdev_auto_examine": true, 00:23:52.590 "iobuf_small_cache_size": 128, 00:23:52.590 "iobuf_large_cache_size": 16 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "bdev_raid_set_options", 00:23:52.590 "params": { 00:23:52.590 "process_window_size_kb": 1024 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "bdev_iscsi_set_options", 00:23:52.590 "params": { 00:23:52.590 "timeout_sec": 30 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "bdev_nvme_set_options", 00:23:52.590 "params": { 00:23:52.590 "action_on_timeout": "none", 00:23:52.590 "timeout_us": 0, 00:23:52.590 "timeout_admin_us": 0, 00:23:52.590 "keep_alive_timeout_ms": 10000, 00:23:52.590 "arbitration_burst": 0, 00:23:52.590 "low_priority_weight": 0, 00:23:52.590 "medium_priority_weight": 0, 00:23:52.590 "high_priority_weight": 0, 00:23:52.590 "nvme_adminq_poll_period_us": 10000, 00:23:52.590 "nvme_ioq_poll_period_us": 0, 00:23:52.590 "io_queue_requests": 0, 00:23:52.590 "delay_cmd_submit": true, 00:23:52.590 "transport_retry_count": 4, 00:23:52.590 "bdev_retry_count": 3, 00:23:52.590 "transport_ack_timeout": 0, 00:23:52.590 "ctrlr_loss_timeout_sec": 0, 00:23:52.590 "reconnect_delay_sec": 0, 00:23:52.590 "fast_io_fail_timeout_sec": 0, 00:23:52.590 "disable_auto_failback": false, 00:23:52.590 "generate_uuids": false, 00:23:52.590 "transport_tos": 0, 00:23:52.590 "nvme_error_stat": false, 00:23:52.590 "rdma_srq_size": 0, 00:23:52.590 "io_path_stat": false, 00:23:52.590 "allow_accel_sequence": false, 00:23:52.590 
"rdma_max_cq_size": 0, 00:23:52.590 "rdma_cm_event_timeout_ms": 0, 00:23:52.590 "dhchap_digests": [ 00:23:52.590 "sha256", 00:23:52.590 "sha384", 00:23:52.590 "sha512" 00:23:52.590 ], 00:23:52.590 "dhchap_dhgroups": [ 00:23:52.590 "null", 00:23:52.590 "ffdhe2048", 00:23:52.590 "ffdhe3072", 00:23:52.590 "ffdhe4096", 00:23:52.590 "ffdhe6144", 00:23:52.590 "ffdhe8192" 00:23:52.590 ] 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "bdev_nvme_set_hotplug", 00:23:52.590 "params": { 00:23:52.590 "period_us": 100000, 00:23:52.590 "enable": false 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "bdev_malloc_create", 00:23:52.590 "params": { 00:23:52.590 "name": "malloc0", 00:23:52.590 "num_blocks": 8192, 00:23:52.590 "block_size": 4096, 00:23:52.590 "physical_block_size": 4096, 00:23:52.590 "uuid": "a72d1472-59bb-4520-af06-3ff3f5d0338d", 00:23:52.590 "optimal_io_boundary": 0 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "bdev_wait_for_examine" 00:23:52.590 } 00:23:52.590 ] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "nbd", 00:23:52.590 "config": [] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "scheduler", 00:23:52.590 "config": [ 00:23:52.590 { 00:23:52.590 "method": "framework_set_scheduler", 00:23:52.590 "params": { 00:23:52.590 "name": "static" 00:23:52.590 } 00:23:52.590 } 00:23:52.590 ] 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "subsystem": "nvmf", 00:23:52.590 "config": [ 00:23:52.590 { 00:23:52.590 "method": "nvmf_set_config", 00:23:52.590 "params": { 00:23:52.590 "discovery_filter": "match_any", 00:23:52.590 "admin_cmd_passthru": { 00:23:52.590 "identify_ctrlr": false 00:23:52.590 } 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "nvmf_set_max_subsystems", 00:23:52.590 "params": { 00:23:52.590 "max_subsystems": 1024 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "nvmf_set_crdt", 00:23:52.590 "params": { 00:23:52.590 "crdt1": 0, 00:23:52.590 "crdt2": 0, 00:23:52.590 "crdt3": 0 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "nvmf_create_transport", 00:23:52.590 "params": { 00:23:52.590 "trtype": "TCP", 00:23:52.590 "max_queue_depth": 128, 00:23:52.590 "max_io_qpairs_per_ctrlr": 127, 00:23:52.590 "in_capsule_data_size": 4096, 00:23:52.590 "max_io_size": 131072, 00:23:52.590 "io_unit_size": 131072, 00:23:52.590 "max_aq_depth": 128, 00:23:52.590 "num_shared_buffers": 511, 00:23:52.590 "buf_cache_size": 4294967295, 00:23:52.590 "dif_insert_or_strip": false, 00:23:52.590 "zcopy": false, 00:23:52.590 "c2h_success": false, 00:23:52.590 "sock_priority": 0, 00:23:52.590 "abort_timeout_sec": 1, 00:23:52.590 "ack_timeout": 0, 00:23:52.590 "data_wr_pool_size": 0 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "nvmf_create_subsystem", 00:23:52.590 "params": { 00:23:52.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.590 "allow_any_host": false, 00:23:52.590 "serial_number": "00000000000000000000", 00:23:52.590 "model_number": "SPDK bdev Controller", 00:23:52.590 "max_namespaces": 32, 00:23:52.590 "min_cntlid": 1, 00:23:52.590 "max_cntlid": 65519, 00:23:52.590 "ana_reporting": false 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "nvmf_subsystem_add_host", 00:23:52.590 "params": { 00:23:52.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.590 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.590 "psk": "key0" 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "nvmf_subsystem_add_ns", 00:23:52.590 
"params": { 00:23:52.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.590 "namespace": { 00:23:52.590 "nsid": 1, 00:23:52.590 "bdev_name": "malloc0", 00:23:52.590 "nguid": "A72D147259BB4520AF063FF3F5D0338D", 00:23:52.590 "uuid": "a72d1472-59bb-4520-af06-3ff3f5d0338d", 00:23:52.590 "no_auto_visible": false 00:23:52.590 } 00:23:52.590 } 00:23:52.590 }, 00:23:52.590 { 00:23:52.590 "method": "nvmf_subsystem_add_listener", 00:23:52.590 "params": { 00:23:52.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.590 "listen_address": { 00:23:52.591 "trtype": "TCP", 00:23:52.591 "adrfam": "IPv4", 00:23:52.591 "traddr": "10.0.0.2", 00:23:52.591 "trsvcid": "4420" 00:23:52.591 }, 00:23:52.591 "secure_channel": false, 00:23:52.591 "sock_impl": "ssl" 00:23:52.591 } 00:23:52.591 } 00:23:52.591 ] 00:23:52.591 } 00:23:52.591 ] 00:23:52.591 }' 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4096442 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4096442 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096442 ']' 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.591 20:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.591 [2024-07-15 20:29:31.106629] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:23:52.591 [2024-07-15 20:29:31.106723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.849 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.849 [2024-07-15 20:29:31.170101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.849 [2024-07-15 20:29:31.252574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.849 [2024-07-15 20:29:31.252629] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.849 [2024-07-15 20:29:31.252650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.849 [2024-07-15 20:29:31.252676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.849 [2024-07-15 20:29:31.252686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:52.849 [2024-07-15 20:29:31.252759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.107 [2024-07-15 20:29:31.497817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.107 [2024-07-15 20:29:31.529830] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.107 [2024-07-15 20:29:31.541128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=4096590 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 4096590 /var/tmp/bdevperf.sock 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096590 ']' 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
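The bdevperf process above is launched with -z and its RPC socket on /var/tmp/bdevperf.sock, reading the JSON that follows from /dev/fd/63; the actual I/O run is only kicked off later via bdevperf.py perform_tests. Outside the test harness the same flow would look roughly like this (commands and paths as they appear in the log, config file name hypothetical):

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bdevperf.json &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests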
00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.673 20:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:53.673 "subsystems": [ 00:23:53.673 { 00:23:53.673 "subsystem": "keyring", 00:23:53.673 "config": [ 00:23:53.673 { 00:23:53.673 "method": "keyring_file_add_key", 00:23:53.673 "params": { 00:23:53.673 "name": "key0", 00:23:53.673 "path": "/tmp/tmp.XMWurxuAWE" 00:23:53.673 } 00:23:53.673 } 00:23:53.673 ] 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "subsystem": "iobuf", 00:23:53.673 "config": [ 00:23:53.673 { 00:23:53.673 "method": "iobuf_set_options", 00:23:53.673 "params": { 00:23:53.673 "small_pool_count": 8192, 00:23:53.673 "large_pool_count": 1024, 00:23:53.673 "small_bufsize": 8192, 00:23:53.673 "large_bufsize": 135168 00:23:53.673 } 00:23:53.673 } 00:23:53.673 ] 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "subsystem": "sock", 00:23:53.673 "config": [ 00:23:53.673 { 00:23:53.673 "method": "sock_set_default_impl", 00:23:53.673 "params": { 00:23:53.673 "impl_name": "posix" 00:23:53.673 } 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "method": "sock_impl_set_options", 00:23:53.673 "params": { 00:23:53.673 "impl_name": "ssl", 00:23:53.673 "recv_buf_size": 4096, 00:23:53.673 "send_buf_size": 4096, 00:23:53.673 "enable_recv_pipe": true, 00:23:53.673 "enable_quickack": false, 00:23:53.673 "enable_placement_id": 0, 00:23:53.673 "enable_zerocopy_send_server": true, 00:23:53.673 "enable_zerocopy_send_client": false, 00:23:53.673 "zerocopy_threshold": 0, 00:23:53.673 "tls_version": 0, 00:23:53.673 "enable_ktls": false 00:23:53.673 } 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "method": "sock_impl_set_options", 00:23:53.673 "params": { 00:23:53.673 "impl_name": "posix", 00:23:53.673 "recv_buf_size": 2097152, 00:23:53.673 "send_buf_size": 2097152, 00:23:53.673 "enable_recv_pipe": true, 00:23:53.673 "enable_quickack": false, 00:23:53.673 "enable_placement_id": 0, 00:23:53.673 "enable_zerocopy_send_server": true, 00:23:53.673 "enable_zerocopy_send_client": false, 00:23:53.673 "zerocopy_threshold": 0, 00:23:53.673 "tls_version": 0, 00:23:53.673 "enable_ktls": false 00:23:53.673 } 00:23:53.673 } 00:23:53.673 ] 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "subsystem": "vmd", 00:23:53.673 "config": [] 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "subsystem": "accel", 00:23:53.673 "config": [ 00:23:53.673 { 00:23:53.673 "method": "accel_set_options", 00:23:53.673 "params": { 00:23:53.673 "small_cache_size": 128, 00:23:53.673 "large_cache_size": 16, 00:23:53.673 "task_count": 2048, 00:23:53.673 "sequence_count": 2048, 00:23:53.673 "buf_count": 2048 00:23:53.673 } 00:23:53.673 } 00:23:53.673 ] 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "subsystem": "bdev", 00:23:53.673 "config": [ 00:23:53.673 { 00:23:53.673 "method": "bdev_set_options", 00:23:53.673 "params": { 00:23:53.673 "bdev_io_pool_size": 65535, 00:23:53.673 "bdev_io_cache_size": 256, 00:23:53.673 "bdev_auto_examine": true, 00:23:53.673 "iobuf_small_cache_size": 128, 00:23:53.673 "iobuf_large_cache_size": 16 00:23:53.673 } 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "method": "bdev_raid_set_options", 00:23:53.673 "params": { 00:23:53.673 "process_window_size_kb": 1024 00:23:53.673 } 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "method": "bdev_iscsi_set_options", 00:23:53.673 "params": { 00:23:53.673 "timeout_sec": 30 00:23:53.673 } 00:23:53.673 }, 00:23:53.673 { 00:23:53.673 "method": "bdev_nvme_set_options", 00:23:53.673 "params": { 00:23:53.673 "action_on_timeout": "none", 
00:23:53.673 "timeout_us": 0, 00:23:53.673 "timeout_admin_us": 0, 00:23:53.673 "keep_alive_timeout_ms": 10000, 00:23:53.673 "arbitration_burst": 0, 00:23:53.673 "low_priority_weight": 0, 00:23:53.673 "medium_priority_weight": 0, 00:23:53.673 "high_priority_weight": 0, 00:23:53.673 "nvme_adminq_poll_period_us": 10000, 00:23:53.673 "nvme_ioq_poll_period_us": 0, 00:23:53.673 "io_queue_requests": 512, 00:23:53.673 "delay_cmd_submit": true, 00:23:53.673 "transport_retry_count": 4, 00:23:53.673 "bdev_retry_count": 3, 00:23:53.674 "transport_ack_timeout": 0, 00:23:53.674 "ctrlr_loss_timeout_sec": 0, 00:23:53.674 "reconnect_delay_sec": 0, 00:23:53.674 "fast_io_fail_timeout_sec": 0, 00:23:53.674 "disable_auto_failback": false, 00:23:53.674 "generate_uuids": false, 00:23:53.674 "transport_tos": 0, 00:23:53.674 "nvme_error_stat": false, 00:23:53.674 "rdma_srq_size": 0, 00:23:53.674 "io_path_stat": false, 00:23:53.674 "allow_accel_sequence": false, 00:23:53.674 "rdma_max_cq_size": 0, 00:23:53.674 "rdma_cm_event_timeout_ms": 0, 00:23:53.674 "dhchap_digests": [ 00:23:53.674 "sha256", 00:23:53.674 "sha384", 00:23:53.674 "sha512" 00:23:53.674 ], 00:23:53.674 "dhchap_dhgroups": [ 00:23:53.674 "null", 00:23:53.674 "ffdhe2048", 00:23:53.674 "ffdhe3072", 00:23:53.674 "ffdhe4096", 00:23:53.674 "ffdhe6144", 00:23:53.674 "ffdhe8192" 00:23:53.674 ] 00:23:53.674 } 00:23:53.674 }, 00:23:53.674 { 00:23:53.674 "method": "bdev_nvme_attach_controller", 00:23:53.674 "params": { 00:23:53.674 "name": "nvme0", 00:23:53.674 "trtype": "TCP", 00:23:53.674 "adrfam": "IPv4", 00:23:53.674 "traddr": "10.0.0.2", 00:23:53.674 "trsvcid": "4420", 00:23:53.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.674 "prchk_reftag": false, 00:23:53.674 "prchk_guard": false, 00:23:53.674 "ctrlr_loss_timeout_sec": 0, 00:23:53.674 "reconnect_delay_sec": 0, 00:23:53.674 "fast_io_fail_timeout_sec": 0, 00:23:53.674 "psk": "key0", 00:23:53.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.674 "hdgst": false, 00:23:53.674 "ddgst": false 00:23:53.674 } 00:23:53.674 }, 00:23:53.674 { 00:23:53.674 "method": "bdev_nvme_set_hotplug", 00:23:53.674 "params": { 00:23:53.674 "period_us": 100000, 00:23:53.674 "enable": false 00:23:53.674 } 00:23:53.674 }, 00:23:53.674 { 00:23:53.674 "method": "bdev_enable_histogram", 00:23:53.674 "params": { 00:23:53.674 "name": "nvme0n1", 00:23:53.674 "enable": true 00:23:53.674 } 00:23:53.674 }, 00:23:53.674 { 00:23:53.674 "method": "bdev_wait_for_examine" 00:23:53.674 } 00:23:53.674 ] 00:23:53.674 }, 00:23:53.674 { 00:23:53.674 "subsystem": "nbd", 00:23:53.674 "config": [] 00:23:53.674 } 00:23:53.674 ] 00:23:53.674 }' 00:23:53.674 20:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.674 [2024-07-15 20:29:32.150195] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:23:53.674 [2024-07-15 20:29:32.150286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096590 ] 00:23:53.674 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.932 [2024-07-15 20:29:32.210611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.932 [2024-07-15 20:29:32.300408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.190 [2024-07-15 20:29:32.474445] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.757 20:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.757 20:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:54.757 20:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:54.757 20:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:55.017 20:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.017 20:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.017 Running I/O for 1 seconds... 00:23:56.391 00:23:56.391 Latency(us) 00:23:56.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.391 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:56.391 Verification LBA range: start 0x0 length 0x2000 00:23:56.391 nvme0n1 : 1.09 1659.97 6.48 0.00 0.00 74674.09 6553.60 103304.15 00:23:56.391 =================================================================================================================== 00:23:56.391 Total : 1659.97 6.48 0.00 0.00 74674.09 6553.60 103304.15 00:23:56.391 0 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:56.391 nvmf_trace.0 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4096590 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4096590 ']' 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 4096590 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096590 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096590' 00:23:56.391 killing process with pid 4096590 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096590 00:23:56.391 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.391 00:23:56.391 Latency(us) 00:23:56.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.391 =================================================================================================================== 00:23:56.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096590 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.391 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.391 rmmod nvme_tcp 00:23:56.649 rmmod nvme_fabrics 00:23:56.649 rmmod nvme_keyring 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4096442 ']' 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4096442 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4096442 ']' 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4096442 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096442 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096442' 00:23:56.649 killing process with pid 4096442 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096442 00:23:56.649 20:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096442 00:23:56.907 20:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.908 20:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.908 20:29:35 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.908 20:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.908 20:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.908 20:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.908 20:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.908 20:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.811 20:29:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.811 20:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.lyydWFMRuo /tmp/tmp.xU2RvmQn68 /tmp/tmp.XMWurxuAWE 00:23:58.811 00:23:58.811 real 1m19.285s 00:23:58.811 user 2m4.478s 00:23:58.811 sys 0m28.823s 00:23:58.811 20:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.811 20:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.811 ************************************ 00:23:58.811 END TEST nvmf_tls 00:23:58.811 ************************************ 00:23:58.811 20:29:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:58.811 20:29:37 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:58.811 20:29:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.811 20:29:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.811 20:29:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.811 ************************************ 00:23:58.811 START TEST nvmf_fips 00:23:58.811 ************************************ 00:23:58.811 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:59.070 * Looking for test storage... 
00:23:59.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.070 20:29:37 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:59.070 Error setting digest 00:23:59.070 00C26C6A0A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:59.070 00C26C6A0A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.070 20:29:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.027 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.027 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.027 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.027 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.028 
20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:01.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:01.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:01.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:01.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.028 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:24:01.285 00:24:01.285 --- 10.0.0.2 ping statistics --- 00:24:01.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.285 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:24:01.285 00:24:01.285 --- 10.0.0.1 ping statistics --- 00:24:01.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.285 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.285 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4098829 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4098829 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4098829 ']' 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.286 20:29:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.286 [2024-07-15 20:29:39.740675] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:24:01.286 [2024-07-15 20:29:39.740763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.286 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.286 [2024-07-15 20:29:39.804048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.542 [2024-07-15 20:29:39.893701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.542 [2024-07-15 20:29:39.893760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:01.542 [2024-07-15 20:29:39.893773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.542 [2024-07-15 20:29:39.893784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.542 [2024-07-15 20:29:39.893793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.542 [2024-07-15 20:29:39.893817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.542 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.798 [2024-07-15 20:29:40.276109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.798 [2024-07-15 20:29:40.292070] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.798 [2024-07-15 20:29:40.292320] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.798 [2024-07-15 20:29:40.324589] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:02.054 malloc0 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4098982 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4098982 /var/tmp/bdevperf.sock 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4098982 ']' 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 10 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.054 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.054 [2024-07-15 20:29:40.417792] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:24:02.054 [2024-07-15 20:29:40.417896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098982 ] 00:24:02.054 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.054 [2024-07-15 20:29:40.475640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.054 [2024-07-15 20:29:40.566995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.310 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.310 20:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:02.310 20:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:02.567 [2024-07-15 20:29:40.903495] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.567 [2024-07-15 20:29:40.903607] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:02.567 TLSTESTn1 00:24:02.567 20:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.824 Running I/O for 10 seconds... 
00:24:12.816 00:24:12.816 Latency(us) 00:24:12.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.816 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:12.816 Verification LBA range: start 0x0 length 0x2000 00:24:12.816 TLSTESTn1 : 10.06 1736.71 6.78 0.00 0.00 73475.55 6213.78 104857.60 00:24:12.816 =================================================================================================================== 00:24:12.816 Total : 1736.71 6.78 0.00 0.00 73475.55 6213.78 104857.60 00:24:12.816 0 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:12.816 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:12.816 nvmf_trace.0 00:24:12.817 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:12.817 20:29:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4098982 00:24:12.817 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4098982 ']' 00:24:12.817 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4098982 00:24:12.817 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:12.817 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.817 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4098982 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4098982' 00:24:13.076 killing process with pid 4098982 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4098982 00:24:13.076 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.076 00:24:13.076 Latency(us) 00:24:13.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.076 =================================================================================================================== 00:24:13.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.076 [2024-07-15 20:29:51.333109] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4098982 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.076 rmmod nvme_tcp 00:24:13.076 rmmod nvme_fabrics 00:24:13.076 rmmod nvme_keyring 00:24:13.076 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4098829 ']' 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4098829 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4098829 ']' 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4098829 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4098829 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4098829' 00:24:13.334 killing process with pid 4098829 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4098829 00:24:13.334 [2024-07-15 20:29:51.639146] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:13.334 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4098829 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.592 20:29:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.493 20:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.493 20:29:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:15.493 00:24:15.493 real 0m16.597s 00:24:15.493 user 0m20.322s 00:24:15.493 sys 0m6.631s 00:24:15.493 20:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:15.493 20:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:15.493 ************************************ 00:24:15.493 END TEST nvmf_fips 
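The FIPS/TLS case ends with the standard teardown seen above: the initiator-side kernel modules are unloaded (the rmmod messages show nvme_tcp, nvme_fabrics and nvme_keyring going away), the remaining SPDK target process (pid 4098829) is killed, and the test addresses are flushed. A condensed sketch of that sequence, limited to commands visible in the trace — the namespace itself is removed by the _remove_spdk_ns helper, whose internals are not shown here:

    modprobe -v -r nvme-tcp      # dependent nvme_fabrics / nvme_keyring get unloaded as well
    modprobe -v -r nvme-fabrics
    kill $nvmfpid                # nvmf target pid, 4098829 in this run
    ip -4 addr flush cvl_0_1     # drop the initiator-side test address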
00:24:15.493 ************************************ 00:24:15.493 20:29:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:15.493 20:29:53 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:15.493 20:29:53 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:15.493 20:29:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:15.493 20:29:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.493 20:29:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.493 ************************************ 00:24:15.493 START TEST nvmf_fuzz 00:24:15.493 ************************************ 00:24:15.493 20:29:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:15.493 * Looking for test storage... 00:24:15.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.493 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.494 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.494 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:15.494 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.494 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.752 20:29:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:15.752 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.752 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.752 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.752 20:29:54 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.752 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.752 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.752 20:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.753 20:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.753 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:15.753 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:15.753 20:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:15.753 20:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:17.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:17.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.655 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:17.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:17.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.656 20:29:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:17.656 00:24:17.656 --- 10.0.0.2 ping statistics --- 00:24:17.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.656 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:17.656 00:24:17.656 --- 10.0.0.1 ping statistics --- 00:24:17.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.656 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4102219 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4102219 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4102219 ']' 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
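The nvmftestinit phase above wires the two ice ports into a point-to-point test network: cvl_0_0 is moved into a fresh network namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm reachability in both directions before the target is started. A minimal sketch of that plumbing, using the same commands the trace shows (interface and namespace names are the ones from this run):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator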
00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.656 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.914 Malloc0 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.914 20:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.915 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:17.915 20:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:49.979 Fuzzing completed. 
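Before the fuzzer runs, the script provisions a minimal target over RPC: a TCP transport, a single 64 MiB malloc bdev, and one subsystem exposing that bdev on 10.0.0.2:4420. A sketch of the same sequence issued through SPDK's scripts/rpc.py client — the test drives these calls via its rpc_cmd wrapper, so the explicit rpc.py invocation here is an assumption, while the RPC names and flags are copied from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512        # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 30-second seeded random fuzz pass against that subsystem (flags as in the trace)
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second pass below uses the prepared example.json command set (-j) instead of purely random commands, which is why it finishes with far fewer I/O commands completed.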
Shutting down the fuzz application 00:24:49.979 00:24:49.979 Dumping successful admin opcodes: 00:24:49.979 8, 9, 10, 24, 00:24:49.979 Dumping successful io opcodes: 00:24:49.979 0, 9, 00:24:49.979 NS: 0x200003aeff00 I/O qp, Total commands completed: 463397, total successful commands: 2680, random_seed: 3220490624 00:24:49.979 NS: 0x200003aeff00 admin qp, Total commands completed: 57168, total successful commands: 455, random_seed: 1197459456 00:24:49.979 20:30:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:50.237 Fuzzing completed. Shutting down the fuzz application 00:24:50.237 00:24:50.237 Dumping successful admin opcodes: 00:24:50.237 24, 00:24:50.237 Dumping successful io opcodes: 00:24:50.237 00:24:50.237 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3123457898 00:24:50.237 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3123622251 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.237 rmmod nvme_tcp 00:24:50.237 rmmod nvme_fabrics 00:24:50.237 rmmod nvme_keyring 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 4102219 ']' 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 4102219 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4102219 ']' 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 4102219 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102219 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102219' 00:24:50.237 killing process with pid 4102219 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 4102219 00:24:50.237 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 4102219 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.496 20:30:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.033 20:30:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.033 20:30:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:53.033 00:24:53.033 real 0m37.087s 00:24:53.033 user 0m51.508s 00:24:53.033 sys 0m15.010s 00:24:53.033 20:30:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.033 20:30:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.033 ************************************ 00:24:53.033 END TEST nvmf_fuzz 00:24:53.033 ************************************ 00:24:53.033 20:30:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:53.033 20:30:31 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:53.033 20:30:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.033 20:30:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.033 20:30:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.033 ************************************ 00:24:53.033 START TEST nvmf_multiconnection 00:24:53.033 ************************************ 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:53.033 * Looking for test storage... 
00:24:53.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.033 20:30:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:54.936 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.937 20:30:33 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.937 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.937 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.937 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.937 20:30:33 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.937 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:24:54.937 00:24:54.937 --- 10.0.0.2 ping statistics --- 00:24:54.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.937 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:24:54.937 00:24:54.937 --- 10.0.0.1 ping statistics --- 00:24:54.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.937 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=4108463 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 4108463 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 4108463 ']' 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
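For the multiconnection run the target application is launched with a four-core mask inside the target namespace, which is why four reactors come up in the EAL output below, and the script then blocks until the app's RPC socket is ready before issuing any rpc_cmd calls. A rough sketch of that launch-and-wait pattern; the polling loop is an assumption standing in for the waitforlisten helper, whose exact mechanism is not shown in the trace:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # assumption: wait until the UNIX-domain RPC socket exists before sending RPCs
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done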
00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.937 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.937 [2024-07-15 20:30:33.309883] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:24:54.937 [2024-07-15 20:30:33.309975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.937 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.937 [2024-07-15 20:30:33.379397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.196 [2024-07-15 20:30:33.474911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.196 [2024-07-15 20:30:33.474964] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.196 [2024-07-15 20:30:33.474978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.196 [2024-07-15 20:30:33.474990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.196 [2024-07-15 20:30:33.475000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.196 [2024-07-15 20:30:33.475065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.196 [2024-07-15 20:30:33.475096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.196 [2024-07-15 20:30:33.475150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.196 [2024-07-15 20:30:33.475152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 [2024-07-15 20:30:33.623793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 
20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 Malloc1 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 [2024-07-15 20:30:33.680763] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 Malloc2 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:55.196 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.196 20:30:33 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.455 Malloc3 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.455 Malloc4 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.455 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 Malloc5 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 Malloc6 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 Malloc7 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.456 20:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 Malloc8 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 Malloc9 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 Malloc10 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.715 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.715 Malloc11 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
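The loop traced above repeats the same four RPCs for each of the eleven subsystems: create a 64 MB malloc bdev with 512-byte blocks, create the subsystem with any-host access and serial SPDKi, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Collapsed into direct rpc.py calls (a sketch; rpc_cmd in the test issues the same RPCs over /var/tmp/spdk.sock), one pass looks roughly like this:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  for i in $(seq 1 11); do
      $RPC bdev_malloc_create 64 512 -b Malloc$i                            # 64 MB bdev, 512-byte blocks
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # -a: allow any host, serial SPDK$i
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i       # expose the bdev as a namespace
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done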
00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.716 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:56.649 20:30:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:56.649 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.649 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.649 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:56.649 20:30:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:58.580 20:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:58.580 20:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:58.580 20:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:58.581 20:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:58.581 20:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.581 20:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:58.581 20:30:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.581 20:30:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:59.147 20:30:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:59.147 20:30:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:59.147 20:30:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:59.147 20:30:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:59.147 20:30:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:01.671 20:30:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:01.671 20:30:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:01.671 20:30:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:01.671 20:30:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:01.671 20:30:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.671 
20:30:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:01.671 20:30:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.671 20:30:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:01.928 20:30:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:01.928 20:30:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:01.928 20:30:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.928 20:30:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:01.928 20:30:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.822 20:30:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:04.753 20:30:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:04.753 20:30:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.753 20:30:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.753 20:30:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:04.753 20:30:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.647 20:30:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:07.581 20:30:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:07.581 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.581 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.581 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.581 20:30:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.479 20:30:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:10.411 20:30:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:10.411 20:30:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:10.411 20:30:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.411 20:30:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:10.411 20:30:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.306 20:30:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:13.239 20:30:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:13.239 20:30:51 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:13.239 20:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.239 20:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:13.239 20:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.135 20:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:15.699 20:30:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:15.699 20:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:15.699 20:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.699 20:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:15.699 20:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.231 20:30:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:18.488 20:30:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:18.488 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:18.488 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.488 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
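Each connect step above follows the same host-side pattern: issue nvme connect against the next cnode NQN, then waitforserial polls lsblk until a block device advertising the matching SPDK serial shows up. A condensed sketch for one subsystem, reusing the host NQN/ID and the SPDK9 serial from this trace; the retry bound mirrors the (( i++ <= 15 )) check in the helper rather than reproducing it exactly:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect --hostnqn=$HOSTNQN --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
  # waitforserial SPDK9: give the controller time to enumerate, then look for its serial.
  for _ in $(seq 1 15); do
      sleep 2
      [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK9)" -ge 1 ] && break
  done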
00:25:18.488 20:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.386 20:30:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:21.318 20:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:21.318 20:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.318 20:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.318 20:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:21.318 20:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.840 20:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:24.406 20:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:24.406 20:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:24.406 20:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.406 20:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:24.406 20:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.303 20:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.303 20:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:26.303 20:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:26.303 20:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.303 20:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.303 20:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:26.303 20:31:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:26.303 [global] 00:25:26.303 thread=1 00:25:26.303 invalidate=1 00:25:26.303 rw=read 00:25:26.303 time_based=1 00:25:26.303 runtime=10 00:25:26.303 ioengine=libaio 00:25:26.303 direct=1 00:25:26.303 bs=262144 00:25:26.303 iodepth=64 00:25:26.303 norandommap=1 00:25:26.303 numjobs=1 00:25:26.303 00:25:26.303 [job0] 00:25:26.303 filename=/dev/nvme0n1 00:25:26.303 [job1] 00:25:26.303 filename=/dev/nvme10n1 00:25:26.303 [job2] 00:25:26.303 filename=/dev/nvme1n1 00:25:26.303 [job3] 00:25:26.303 filename=/dev/nvme2n1 00:25:26.303 [job4] 00:25:26.303 filename=/dev/nvme3n1 00:25:26.303 [job5] 00:25:26.303 filename=/dev/nvme4n1 00:25:26.303 [job6] 00:25:26.303 filename=/dev/nvme5n1 00:25:26.303 [job7] 00:25:26.303 filename=/dev/nvme6n1 00:25:26.303 [job8] 00:25:26.303 filename=/dev/nvme7n1 00:25:26.303 [job9] 00:25:26.303 filename=/dev/nvme8n1 00:25:26.303 [job10] 00:25:26.303 filename=/dev/nvme9n1 00:25:26.303 Could not set queue depth (nvme0n1) 00:25:26.303 Could not set queue depth (nvme10n1) 00:25:26.303 Could not set queue depth (nvme1n1) 00:25:26.303 Could not set queue depth (nvme2n1) 00:25:26.303 Could not set queue depth (nvme3n1) 00:25:26.303 Could not set queue depth (nvme4n1) 00:25:26.303 Could not set queue depth (nvme5n1) 00:25:26.303 Could not set queue depth (nvme6n1) 00:25:26.303 Could not set queue depth (nvme7n1) 00:25:26.303 Could not set queue depth (nvme8n1) 00:25:26.303 Could not set queue depth (nvme9n1) 00:25:26.561 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.561 fio-3.35 00:25:26.561 Starting 11 threads 00:25:38.771 00:25:38.771 job0: 
(groupid=0, jobs=1): err= 0: pid=4112707: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=738, BW=185MiB/s (194MB/s)(1869MiB/10124msec) 00:25:38.771 slat (usec): min=9, max=140536, avg=891.20, stdev=5284.24 00:25:38.771 clat (usec): min=1327, max=608354, avg=85687.74, stdev=83304.22 00:25:38.771 lat (usec): min=1363, max=608410, avg=86578.94, stdev=84390.15 00:25:38.771 clat percentiles (msec): 00:25:38.771 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 38], 00:25:38.771 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 59], 60.00th=[ 71], 00:25:38.771 | 70.00th=[ 91], 80.00th=[ 112], 90.00th=[ 186], 95.00th=[ 257], 00:25:38.771 | 99.00th=[ 447], 99.50th=[ 485], 99.90th=[ 550], 99.95th=[ 558], 00:25:38.771 | 99.99th=[ 609] 00:25:38.771 bw ( KiB/s): min=33792, max=400896, per=11.14%, avg=189798.40, stdev=100364.39, samples=20 00:25:38.771 iops : min= 132, max= 1566, avg=741.40, stdev=392.05, samples=20 00:25:38.771 lat (msec) : 2=0.44%, 4=1.97%, 10=2.69%, 20=4.55%, 50=33.17% 00:25:38.771 lat (msec) : 100=32.90%, 250=18.42%, 500=5.59%, 750=0.28% 00:25:38.771 cpu : usr=0.39%, sys=2.39%, ctx=1854, majf=0, minf=4097 00:25:38.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:38.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.771 issued rwts: total=7477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.771 job1: (groupid=0, jobs=1): err= 0: pid=4112708: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=572, BW=143MiB/s (150MB/s)(1441MiB/10063msec) 00:25:38.771 slat (usec): min=10, max=68167, avg=1391.76, stdev=4198.63 00:25:38.771 clat (msec): min=3, max=296, avg=110.30, stdev=45.63 00:25:38.771 lat (msec): min=3, max=296, avg=111.69, stdev=46.24 00:25:38.771 clat percentiles (msec): 00:25:38.771 | 1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 63], 20.00th=[ 72], 00:25:38.771 | 30.00th=[ 81], 40.00th=[ 92], 50.00th=[ 107], 60.00th=[ 118], 00:25:38.771 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 171], 95.00th=[ 184], 00:25:38.771 | 99.00th=[ 245], 99.50th=[ 257], 99.90th=[ 268], 99.95th=[ 284], 00:25:38.771 | 99.99th=[ 296] 00:25:38.771 bw ( KiB/s): min=80896, max=246784, per=8.57%, avg=145894.40, stdev=52231.52, samples=20 00:25:38.771 iops : min= 316, max= 964, avg=569.90, stdev=204.03, samples=20 00:25:38.771 lat (msec) : 4=0.05%, 10=0.85%, 20=0.87%, 50=3.90%, 100=38.96% 00:25:38.771 lat (msec) : 250=54.48%, 500=0.89% 00:25:38.771 cpu : usr=0.38%, sys=2.03%, ctx=1479, majf=0, minf=4097 00:25:38.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:38.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.771 issued rwts: total=5762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.771 job2: (groupid=0, jobs=1): err= 0: pid=4112709: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=761, BW=190MiB/s (200MB/s)(1913MiB/10052msec) 00:25:38.771 slat (usec): min=10, max=153735, avg=1110.58, stdev=4169.30 00:25:38.771 clat (msec): min=28, max=279, avg=82.89, stdev=51.04 00:25:38.771 lat (msec): min=28, max=390, avg=84.00, stdev=51.64 00:25:38.771 clat percentiles (msec): 00:25:38.771 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 39], 00:25:38.771 | 30.00th=[ 49], 40.00th=[ 
56], 50.00th=[ 68], 60.00th=[ 82], 00:25:38.771 | 70.00th=[ 94], 80.00th=[ 113], 90.00th=[ 169], 95.00th=[ 192], 00:25:38.771 | 99.00th=[ 245], 99.50th=[ 257], 99.90th=[ 275], 99.95th=[ 275], 00:25:38.771 | 99.99th=[ 279] 00:25:38.771 bw ( KiB/s): min=77824, max=458240, per=11.41%, avg=194304.00, stdev=110152.92, samples=20 00:25:38.771 iops : min= 304, max= 1790, avg=759.00, stdev=430.28, samples=20 00:25:38.771 lat (msec) : 50=31.41%, 100=43.28%, 250=24.68%, 500=0.63% 00:25:38.771 cpu : usr=0.44%, sys=2.70%, ctx=1712, majf=0, minf=4097 00:25:38.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:38.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.771 issued rwts: total=7653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.771 job3: (groupid=0, jobs=1): err= 0: pid=4112710: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=526, BW=132MiB/s (138MB/s)(1333MiB/10131msec) 00:25:38.771 slat (usec): min=10, max=193482, avg=1524.94, stdev=6090.82 00:25:38.771 clat (msec): min=2, max=565, avg=119.97, stdev=78.06 00:25:38.771 lat (msec): min=6, max=565, avg=121.50, stdev=78.77 00:25:38.771 clat percentiles (msec): 00:25:38.771 | 1.00th=[ 21], 5.00th=[ 42], 10.00th=[ 49], 20.00th=[ 58], 00:25:38.771 | 30.00th=[ 80], 40.00th=[ 93], 50.00th=[ 101], 60.00th=[ 114], 00:25:38.771 | 70.00th=[ 140], 80.00th=[ 171], 90.00th=[ 194], 95.00th=[ 234], 00:25:38.771 | 99.00th=[ 485], 99.50th=[ 527], 99.90th=[ 550], 99.95th=[ 558], 00:25:38.771 | 99.99th=[ 567] 00:25:38.771 bw ( KiB/s): min=41984, max=238080, per=7.92%, avg=134900.35, stdev=63405.21, samples=20 00:25:38.771 iops : min= 164, max= 930, avg=526.95, stdev=247.68, samples=20 00:25:38.771 lat (msec) : 4=0.02%, 10=0.23%, 20=0.68%, 50=10.99%, 100=37.90% 00:25:38.771 lat (msec) : 250=45.58%, 500=3.99%, 750=0.62% 00:25:38.771 cpu : usr=0.32%, sys=1.83%, ctx=1305, majf=0, minf=4097 00:25:38.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:38.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.771 issued rwts: total=5333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.771 job4: (groupid=0, jobs=1): err= 0: pid=4112711: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=459, BW=115MiB/s (121MB/s)(1164MiB/10121msec) 00:25:38.771 slat (usec): min=9, max=190965, avg=1478.56, stdev=8170.38 00:25:38.771 clat (usec): min=1202, max=633969, avg=137600.53, stdev=94943.65 00:25:38.771 lat (usec): min=1233, max=637008, avg=139079.09, stdev=96393.38 00:25:38.771 clat percentiles (msec): 00:25:38.771 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 25], 20.00th=[ 57], 00:25:38.771 | 30.00th=[ 94], 40.00th=[ 112], 50.00th=[ 127], 60.00th=[ 146], 00:25:38.771 | 70.00th=[ 171], 80.00th=[ 190], 90.00th=[ 239], 95.00th=[ 317], 00:25:38.771 | 99.00th=[ 481], 99.50th=[ 514], 99.90th=[ 531], 99.95th=[ 575], 00:25:38.771 | 99.99th=[ 634] 00:25:38.771 bw ( KiB/s): min=31232, max=204800, per=6.90%, avg=117529.60, stdev=47774.80, samples=20 00:25:38.771 iops : min= 122, max= 800, avg=459.10, stdev=186.62, samples=20 00:25:38.771 lat (msec) : 2=0.02%, 4=0.52%, 10=2.79%, 20=4.98%, 50=10.18% 00:25:38.771 lat (msec) : 100=14.05%, 250=58.42%, 500=8.27%, 750=0.75% 
00:25:38.771 cpu : usr=0.32%, sys=1.33%, ctx=1270, majf=0, minf=4097 00:25:38.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:38.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.771 issued rwts: total=4654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.771 job5: (groupid=0, jobs=1): err= 0: pid=4112712: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=607, BW=152MiB/s (159MB/s)(1527MiB/10060msec) 00:25:38.771 slat (usec): min=9, max=83197, avg=1432.76, stdev=4488.47 00:25:38.771 clat (msec): min=10, max=492, avg=103.88, stdev=43.72 00:25:38.771 lat (msec): min=10, max=492, avg=105.31, stdev=44.39 00:25:38.771 clat percentiles (msec): 00:25:38.771 | 1.00th=[ 33], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 67], 00:25:38.771 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 94], 60.00th=[ 108], 00:25:38.771 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 161], 95.00th=[ 180], 00:25:38.771 | 99.00th=[ 259], 99.50th=[ 268], 99.90th=[ 279], 99.95th=[ 288], 00:25:38.771 | 99.99th=[ 493] 00:25:38.771 bw ( KiB/s): min=77824, max=247808, per=9.09%, avg=154789.65, stdev=53500.69, samples=20 00:25:38.771 iops : min= 304, max= 968, avg=604.60, stdev=209.02, samples=20 00:25:38.771 lat (msec) : 20=0.16%, 50=1.96%, 100=52.35%, 250=44.28%, 500=1.24% 00:25:38.771 cpu : usr=0.44%, sys=2.12%, ctx=1415, majf=0, minf=4097 00:25:38.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:38.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.771 issued rwts: total=6109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.771 job6: (groupid=0, jobs=1): err= 0: pid=4112713: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=527, BW=132MiB/s (138MB/s)(1333MiB/10119msec) 00:25:38.771 slat (usec): min=9, max=281928, avg=998.64, stdev=7708.59 00:25:38.771 clat (usec): min=1288, max=754489, avg=120311.50, stdev=91040.11 00:25:38.771 lat (usec): min=1317, max=754510, avg=121310.14, stdev=92448.86 00:25:38.771 clat percentiles (msec): 00:25:38.771 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 51], 00:25:38.771 | 30.00th=[ 70], 40.00th=[ 94], 50.00th=[ 110], 60.00th=[ 124], 00:25:38.771 | 70.00th=[ 138], 80.00th=[ 163], 90.00th=[ 222], 95.00th=[ 317], 00:25:38.771 | 99.00th=[ 468], 99.50th=[ 481], 99.90th=[ 617], 99.95th=[ 751], 00:25:38.771 | 99.99th=[ 751] 00:25:38.771 bw ( KiB/s): min=32256, max=248832, per=7.92%, avg=134912.00, stdev=56572.62, samples=20 00:25:38.771 iops : min= 126, max= 972, avg=527.00, stdev=220.99, samples=20 00:25:38.771 lat (msec) : 2=0.09%, 4=0.15%, 10=3.92%, 20=3.92%, 50=11.66% 00:25:38.771 lat (msec) : 100=23.96%, 250=49.17%, 500=6.88%, 750=0.19%, 1000=0.06% 00:25:38.771 cpu : usr=0.18%, sys=1.60%, ctx=1697, majf=0, minf=3721 00:25:38.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:38.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.771 issued rwts: total=5333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.771 job7: (groupid=0, jobs=1): err= 0: 
pid=4112714: Mon Jul 15 20:31:15 2024 00:25:38.771 read: IOPS=543, BW=136MiB/s (142MB/s)(1365MiB/10054msec) 00:25:38.772 slat (usec): min=11, max=353448, avg=1581.58, stdev=7864.15 00:25:38.772 clat (msec): min=3, max=795, avg=116.14, stdev=76.73 00:25:38.772 lat (msec): min=3, max=795, avg=117.73, stdev=77.99 00:25:38.772 clat percentiles (msec): 00:25:38.772 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 53], 20.00th=[ 69], 00:25:38.772 | 30.00th=[ 79], 40.00th=[ 90], 50.00th=[ 102], 60.00th=[ 115], 00:25:38.772 | 70.00th=[ 129], 80.00th=[ 144], 90.00th=[ 171], 95.00th=[ 253], 00:25:38.772 | 99.00th=[ 502], 99.50th=[ 542], 99.90th=[ 558], 99.95th=[ 676], 00:25:38.772 | 99.99th=[ 793] 00:25:38.772 bw ( KiB/s): min=31232, max=228352, per=8.11%, avg=138188.80, stdev=55254.29, samples=20 00:25:38.772 iops : min= 122, max= 892, avg=539.80, stdev=215.84, samples=20 00:25:38.772 lat (msec) : 4=0.04%, 10=0.24%, 20=1.46%, 50=7.16%, 100=39.61% 00:25:38.772 lat (msec) : 250=46.26%, 500=4.21%, 750=1.01%, 1000=0.02% 00:25:38.772 cpu : usr=0.33%, sys=1.98%, ctx=1344, majf=0, minf=4097 00:25:38.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:38.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.772 issued rwts: total=5461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.772 job8: (groupid=0, jobs=1): err= 0: pid=4112729: Mon Jul 15 20:31:15 2024 00:25:38.772 read: IOPS=681, BW=170MiB/s (179MB/s)(1706MiB/10017msec) 00:25:38.772 slat (usec): min=9, max=118731, avg=1108.15, stdev=4016.58 00:25:38.772 clat (usec): min=1984, max=465541, avg=92801.11, stdev=60984.71 00:25:38.772 lat (msec): min=2, max=465, avg=93.91, stdev=61.38 00:25:38.772 clat percentiles (msec): 00:25:38.772 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 43], 20.00th=[ 54], 00:25:38.772 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 83], 00:25:38.772 | 70.00th=[ 101], 80.00th=[ 127], 90.00th=[ 174], 95.00th=[ 211], 00:25:38.772 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 435], 99.95th=[ 443], 00:25:38.772 | 99.99th=[ 464] 00:25:38.772 bw ( KiB/s): min=53248, max=300032, per=10.16%, avg=173037.05, stdev=69063.88, samples=20 00:25:38.772 iops : min= 208, max= 1172, avg=675.90, stdev=269.82, samples=20 00:25:38.772 lat (msec) : 2=0.01%, 4=0.13%, 10=0.66%, 20=2.83%, 50=13.66% 00:25:38.772 lat (msec) : 100=52.73%, 250=26.99%, 500=2.99% 00:25:38.772 cpu : usr=0.33%, sys=2.33%, ctx=1561, majf=0, minf=4097 00:25:38.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:38.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.772 issued rwts: total=6822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.772 job9: (groupid=0, jobs=1): err= 0: pid=4112743: Mon Jul 15 20:31:15 2024 00:25:38.772 read: IOPS=604, BW=151MiB/s (158MB/s)(1519MiB/10050msec) 00:25:38.772 slat (usec): min=9, max=141991, avg=745.44, stdev=4607.00 00:25:38.772 clat (usec): min=1359, max=458446, avg=105047.60, stdev=67893.97 00:25:38.772 lat (usec): min=1438, max=458467, avg=105793.04, stdev=68278.79 00:25:38.772 clat percentiles (msec): 00:25:38.772 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 30], 20.00th=[ 52], 00:25:38.772 | 30.00th=[ 67], 40.00th=[ 77], 
50.00th=[ 86], 60.00th=[ 100], 00:25:38.772 | 70.00th=[ 133], 80.00th=[ 167], 90.00th=[ 194], 95.00th=[ 230], 00:25:38.772 | 99.00th=[ 271], 99.50th=[ 388], 99.90th=[ 456], 99.95th=[ 456], 00:25:38.772 | 99.99th=[ 460] 00:25:38.772 bw ( KiB/s): min=96256, max=250880, per=9.04%, avg=153907.20, stdev=40733.15, samples=20 00:25:38.772 iops : min= 376, max= 980, avg=601.20, stdev=159.11, samples=20 00:25:38.772 lat (msec) : 2=0.03%, 4=0.15%, 10=1.20%, 20=4.40%, 50=13.43% 00:25:38.772 lat (msec) : 100=41.48%, 250=37.10%, 500=2.21% 00:25:38.772 cpu : usr=0.31%, sys=1.81%, ctx=1840, majf=0, minf=4097 00:25:38.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:38.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.772 issued rwts: total=6075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.772 job10: (groupid=0, jobs=1): err= 0: pid=4112752: Mon Jul 15 20:31:15 2024 00:25:38.772 read: IOPS=668, BW=167MiB/s (175MB/s)(1683MiB/10061msec) 00:25:38.772 slat (usec): min=13, max=139900, avg=1460.65, stdev=4332.80 00:25:38.772 clat (msec): min=4, max=252, avg=94.14, stdev=47.08 00:25:38.772 lat (msec): min=4, max=288, avg=95.60, stdev=47.78 00:25:38.772 clat percentiles (msec): 00:25:38.772 | 1.00th=[ 43], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 54], 00:25:38.772 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 80], 60.00th=[ 91], 00:25:38.772 | 70.00th=[ 105], 80.00th=[ 132], 90.00th=[ 176], 95.00th=[ 192], 00:25:38.772 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 243], 99.95th=[ 243], 00:25:38.772 | 99.99th=[ 253] 00:25:38.772 bw ( KiB/s): min=83456, max=317440, per=10.02%, avg=170675.20, stdev=73110.54, samples=20 00:25:38.772 iops : min= 326, max= 1240, avg=666.70, stdev=285.59, samples=20 00:25:38.772 lat (msec) : 10=0.07%, 50=10.91%, 100=56.32%, 250=32.66%, 500=0.04% 00:25:38.772 cpu : usr=0.56%, sys=2.24%, ctx=1440, majf=0, minf=4097 00:25:38.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:38.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.772 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.772 00:25:38.772 Run status group 0 (all jobs): 00:25:38.772 READ: bw=1663MiB/s (1744MB/s), 115MiB/s-190MiB/s (121MB/s-200MB/s), io=16.5GiB (17.7GB), run=10017-10131msec 00:25:38.772 00:25:38.772 Disk stats (read/write): 00:25:38.772 nvme0n1: ios=14782/0, merge=0/0, ticks=1219210/0, in_queue=1219210, util=96.69% 00:25:38.772 nvme10n1: ios=11240/0, merge=0/0, ticks=1228888/0, in_queue=1228888, util=96.95% 00:25:38.772 nvme1n1: ios=14889/0, merge=0/0, ticks=1231777/0, in_queue=1231777, util=97.28% 00:25:38.772 nvme2n1: ios=10508/0, merge=0/0, ticks=1220849/0, in_queue=1220849, util=97.46% 00:25:38.772 nvme3n1: ios=9186/0, merge=0/0, ticks=1219915/0, in_queue=1219915, util=97.53% 00:25:38.772 nvme4n1: ios=11924/0, merge=0/0, ticks=1229219/0, in_queue=1229219, util=97.96% 00:25:38.772 nvme5n1: ios=10485/0, merge=0/0, ticks=1215364/0, in_queue=1215364, util=98.18% 00:25:38.772 nvme6n1: ios=10529/0, merge=0/0, ticks=1207615/0, in_queue=1207615, util=98.31% 00:25:38.772 nvme7n1: ios=13227/0, merge=0/0, ticks=1232651/0, in_queue=1232651, util=98.81% 00:25:38.772 
nvme8n1: ios=11796/0, merge=0/0, ticks=1237522/0, in_queue=1237522, util=99.07% 00:25:38.772 nvme9n1: ios=13153/0, merge=0/0, ticks=1224309/0, in_queue=1224309, util=99.21% 00:25:38.772 20:31:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:38.772 [global] 00:25:38.772 thread=1 00:25:38.772 invalidate=1 00:25:38.772 rw=randwrite 00:25:38.772 time_based=1 00:25:38.772 runtime=10 00:25:38.772 ioengine=libaio 00:25:38.772 direct=1 00:25:38.772 bs=262144 00:25:38.772 iodepth=64 00:25:38.772 norandommap=1 00:25:38.772 numjobs=1 00:25:38.772 00:25:38.772 [job0] 00:25:38.772 filename=/dev/nvme0n1 00:25:38.772 [job1] 00:25:38.772 filename=/dev/nvme10n1 00:25:38.772 [job2] 00:25:38.772 filename=/dev/nvme1n1 00:25:38.772 [job3] 00:25:38.772 filename=/dev/nvme2n1 00:25:38.772 [job4] 00:25:38.772 filename=/dev/nvme3n1 00:25:38.772 [job5] 00:25:38.772 filename=/dev/nvme4n1 00:25:38.772 [job6] 00:25:38.772 filename=/dev/nvme5n1 00:25:38.772 [job7] 00:25:38.772 filename=/dev/nvme6n1 00:25:38.772 [job8] 00:25:38.772 filename=/dev/nvme7n1 00:25:38.772 [job9] 00:25:38.772 filename=/dev/nvme8n1 00:25:38.772 [job10] 00:25:38.772 filename=/dev/nvme9n1 00:25:38.772 Could not set queue depth (nvme0n1) 00:25:38.772 Could not set queue depth (nvme10n1) 00:25:38.772 Could not set queue depth (nvme1n1) 00:25:38.772 Could not set queue depth (nvme2n1) 00:25:38.772 Could not set queue depth (nvme3n1) 00:25:38.772 Could not set queue depth (nvme4n1) 00:25:38.772 Could not set queue depth (nvme5n1) 00:25:38.772 Could not set queue depth (nvme6n1) 00:25:38.772 Could not set queue depth (nvme7n1) 00:25:38.772 Could not set queue depth (nvme8n1) 00:25:38.772 Could not set queue depth (nvme9n1) 00:25:38.772 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.772 fio-3.35 00:25:38.772 Starting 11 threads 00:25:48.740 00:25:48.740 job0: (groupid=0, jobs=1): err= 0: pid=4113882: Mon Jul 15 20:31:26 2024 00:25:48.740 write: IOPS=589, BW=147MiB/s (155MB/s)(1485MiB/10066msec); 0 zone resets 00:25:48.740 slat (usec): min=19, max=43340, avg=1531.51, stdev=3247.84 
00:25:48.740 clat (msec): min=2, max=344, avg=106.92, stdev=51.52 00:25:48.740 lat (msec): min=3, max=349, avg=108.45, stdev=52.09 00:25:48.740 clat percentiles (msec): 00:25:48.740 | 1.00th=[ 11], 5.00th=[ 55], 10.00th=[ 69], 20.00th=[ 72], 00:25:48.740 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 106], 00:25:48.740 | 70.00th=[ 129], 80.00th=[ 155], 90.00th=[ 184], 95.00th=[ 209], 00:25:48.740 | 99.00th=[ 239], 99.50th=[ 275], 99.90th=[ 334], 99.95th=[ 342], 00:25:48.740 | 99.99th=[ 347] 00:25:48.740 bw ( KiB/s): min=77824, max=227840, per=12.48%, avg=150417.65, stdev=53573.60, samples=20 00:25:48.740 iops : min= 304, max= 890, avg=587.55, stdev=209.26, samples=20 00:25:48.740 lat (msec) : 4=0.05%, 10=0.86%, 20=1.20%, 50=2.46%, 100=54.23% 00:25:48.740 lat (msec) : 250=40.55%, 500=0.66% 00:25:48.740 cpu : usr=1.93%, sys=1.78%, ctx=1968, majf=0, minf=1 00:25:48.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:48.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.740 issued rwts: total=0,5938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.740 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.740 job1: (groupid=0, jobs=1): err= 0: pid=4113894: Mon Jul 15 20:31:26 2024 00:25:48.740 write: IOPS=344, BW=86.2MiB/s (90.4MB/s)(884MiB/10255msec); 0 zone resets 00:25:48.740 slat (usec): min=24, max=155784, avg=1755.32, stdev=6051.25 00:25:48.740 clat (usec): min=1820, max=621726, avg=183669.22, stdev=106288.59 00:25:48.740 lat (usec): min=1864, max=634482, avg=185424.54, stdev=107424.09 00:25:48.740 clat percentiles (msec): 00:25:48.740 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 51], 20.00th=[ 95], 00:25:48.740 | 30.00th=[ 123], 40.00th=[ 146], 50.00th=[ 174], 60.00th=[ 203], 00:25:48.740 | 70.00th=[ 234], 80.00th=[ 264], 90.00th=[ 300], 95.00th=[ 388], 00:25:48.740 | 99.00th=[ 493], 99.50th=[ 542], 99.90th=[ 617], 99.95th=[ 617], 00:25:48.740 | 99.99th=[ 625] 00:25:48.740 bw ( KiB/s): min=32256, max=176128, per=7.37%, avg=88890.75, stdev=34487.87, samples=20 00:25:48.740 iops : min= 126, max= 688, avg=347.20, stdev=134.73, samples=20 00:25:48.740 lat (msec) : 2=0.03%, 4=0.03%, 10=1.36%, 20=1.53%, 50=7.10% 00:25:48.740 lat (msec) : 100=10.80%, 250=53.99%, 500=24.41%, 750=0.76% 00:25:48.740 cpu : usr=0.85%, sys=1.25%, ctx=2207, majf=0, minf=1 00:25:48.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:48.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.740 issued rwts: total=0,3536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.740 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.740 job2: (groupid=0, jobs=1): err= 0: pid=4113895: Mon Jul 15 20:31:26 2024 00:25:48.740 write: IOPS=467, BW=117MiB/s (123MB/s)(1179MiB/10076msec); 0 zone resets 00:25:48.740 slat (usec): min=19, max=409224, avg=1443.66, stdev=8766.93 00:25:48.740 clat (msec): min=4, max=964, avg=135.30, stdev=121.24 00:25:48.740 lat (msec): min=4, max=964, avg=136.74, stdev=122.42 00:25:48.740 clat percentiles (msec): 00:25:48.740 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 46], 20.00th=[ 59], 00:25:48.740 | 30.00th=[ 80], 40.00th=[ 90], 50.00th=[ 107], 60.00th=[ 128], 00:25:48.740 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 251], 95.00th=[ 279], 00:25:48.740 | 99.00th=[ 911], 99.50th=[ 927], 99.90th=[ 
961], 99.95th=[ 961], 00:25:48.740 | 99.99th=[ 961] 00:25:48.740 bw ( KiB/s): min= 4096, max=242176, per=9.88%, avg=119058.45, stdev=62017.47, samples=20 00:25:48.740 iops : min= 16, max= 946, avg=465.05, stdev=242.27, samples=20 00:25:48.740 lat (msec) : 10=0.59%, 20=1.15%, 50=9.91%, 100=35.66%, 250=42.49% 00:25:48.740 lat (msec) : 500=8.17%, 750=0.78%, 1000=1.25% 00:25:48.740 cpu : usr=1.25%, sys=1.62%, ctx=2554, majf=0, minf=1 00:25:48.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:48.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.741 issued rwts: total=0,4714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.741 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.741 job3: (groupid=0, jobs=1): err= 0: pid=4113896: Mon Jul 15 20:31:26 2024 00:25:48.741 write: IOPS=315, BW=78.9MiB/s (82.7MB/s)(803MiB/10170msec); 0 zone resets 00:25:48.741 slat (usec): min=22, max=374713, avg=2880.73, stdev=10922.25 00:25:48.741 clat (msec): min=6, max=861, avg=199.79, stdev=129.22 00:25:48.741 lat (msec): min=7, max=861, avg=202.67, stdev=130.53 00:25:48.741 clat percentiles (msec): 00:25:48.741 | 1.00th=[ 32], 5.00th=[ 60], 10.00th=[ 75], 20.00th=[ 103], 00:25:48.741 | 30.00th=[ 136], 40.00th=[ 155], 50.00th=[ 180], 60.00th=[ 199], 00:25:48.741 | 70.00th=[ 236], 80.00th=[ 264], 90.00th=[ 309], 95.00th=[ 397], 00:25:48.741 | 99.00th=[ 785], 99.50th=[ 802], 99.90th=[ 810], 99.95th=[ 860], 00:25:48.741 | 99.99th=[ 860] 00:25:48.741 bw ( KiB/s): min=26624, max=186368, per=6.68%, avg=80558.05, stdev=37785.11, samples=20 00:25:48.741 iops : min= 104, max= 728, avg=314.65, stdev=147.62, samples=20 00:25:48.741 lat (msec) : 10=0.06%, 20=0.44%, 50=2.87%, 100=15.83%, 250=57.07% 00:25:48.741 lat (msec) : 500=19.88%, 750=2.40%, 1000=1.46% 00:25:48.741 cpu : usr=1.01%, sys=1.08%, ctx=1152, majf=0, minf=1 00:25:48.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:48.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.741 issued rwts: total=0,3210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.741 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.741 job4: (groupid=0, jobs=1): err= 0: pid=4113897: Mon Jul 15 20:31:26 2024 00:25:48.741 write: IOPS=355, BW=88.9MiB/s (93.2MB/s)(911MiB/10250msec); 0 zone resets 00:25:48.741 slat (usec): min=19, max=76206, avg=1738.53, stdev=4907.39 00:25:48.741 clat (msec): min=2, max=858, avg=178.12, stdev=116.36 00:25:48.741 lat (msec): min=2, max=858, avg=179.86, stdev=117.06 00:25:48.741 clat percentiles (msec): 00:25:48.741 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 50], 20.00th=[ 88], 00:25:48.741 | 30.00th=[ 115], 40.00th=[ 146], 50.00th=[ 178], 60.00th=[ 201], 00:25:48.741 | 70.00th=[ 224], 80.00th=[ 251], 90.00th=[ 275], 95.00th=[ 296], 00:25:48.741 | 99.00th=[ 802], 99.50th=[ 844], 99.90th=[ 852], 99.95th=[ 852], 00:25:48.741 | 99.99th=[ 860] 00:25:48.741 bw ( KiB/s): min=16384, max=210944, per=7.60%, avg=91676.85, stdev=40896.48, samples=20 00:25:48.741 iops : min= 64, max= 824, avg=358.05, stdev=159.77, samples=20 00:25:48.741 lat (msec) : 4=0.14%, 10=1.10%, 20=1.84%, 50=7.16%, 100=14.05% 00:25:48.741 lat (msec) : 250=55.17%, 500=18.93%, 750=0.08%, 1000=1.54% 00:25:48.741 cpu : usr=1.10%, sys=1.15%, ctx=2219, majf=0, minf=1 00:25:48.741 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:48.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.741 issued rwts: total=0,3645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.741 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.741 job5: (groupid=0, jobs=1): err= 0: pid=4113898: Mon Jul 15 20:31:26 2024 00:25:48.741 write: IOPS=479, BW=120MiB/s (126MB/s)(1208MiB/10084msec); 0 zone resets 00:25:48.741 slat (usec): min=22, max=101931, avg=1167.90, stdev=4142.81 00:25:48.741 clat (msec): min=2, max=801, avg=132.12, stdev=115.54 00:25:48.741 lat (msec): min=2, max=801, avg=133.29, stdev=116.37 00:25:48.741 clat percentiles (msec): 00:25:48.741 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 40], 00:25:48.741 | 30.00th=[ 83], 40.00th=[ 107], 50.00th=[ 118], 60.00th=[ 132], 00:25:48.741 | 70.00th=[ 157], 80.00th=[ 190], 90.00th=[ 239], 95.00th=[ 275], 00:25:48.741 | 99.00th=[ 659], 99.50th=[ 785], 99.90th=[ 802], 99.95th=[ 802], 00:25:48.741 | 99.99th=[ 802] 00:25:48.741 bw ( KiB/s): min=67584, max=195584, per=10.12%, avg=122070.90, stdev=35961.50, samples=20 00:25:48.741 iops : min= 264, max= 764, avg=476.80, stdev=140.50, samples=20 00:25:48.741 lat (msec) : 4=4.72%, 10=6.21%, 20=2.65%, 50=8.45%, 100=14.68% 00:25:48.741 lat (msec) : 250=55.81%, 500=5.15%, 750=1.47%, 1000=0.87% 00:25:48.741 cpu : usr=1.48%, sys=1.81%, ctx=3270, majf=0, minf=1 00:25:48.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:48.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.741 issued rwts: total=0,4831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.741 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.741 job6: (groupid=0, jobs=1): err= 0: pid=4113900: Mon Jul 15 20:31:26 2024 00:25:48.741 write: IOPS=418, BW=105MiB/s (110MB/s)(1064MiB/10170msec); 0 zone resets 00:25:48.741 slat (usec): min=23, max=226297, avg=1698.16, stdev=7102.77 00:25:48.741 clat (msec): min=2, max=689, avg=150.68, stdev=98.83 00:25:48.741 lat (msec): min=3, max=689, avg=152.38, stdev=100.14 00:25:48.741 clat percentiles (msec): 00:25:48.741 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 50], 20.00th=[ 73], 00:25:48.741 | 30.00th=[ 97], 40.00th=[ 121], 50.00th=[ 134], 60.00th=[ 148], 00:25:48.741 | 70.00th=[ 169], 80.00th=[ 228], 90.00th=[ 271], 95.00th=[ 305], 00:25:48.741 | 99.00th=[ 642], 99.50th=[ 659], 99.90th=[ 676], 99.95th=[ 693], 00:25:48.741 | 99.99th=[ 693] 00:25:48.741 bw ( KiB/s): min=26624, max=237568, per=8.90%, avg=107352.45, stdev=49090.97, samples=20 00:25:48.741 iops : min= 104, max= 928, avg=419.30, stdev=191.75, samples=20 00:25:48.741 lat (msec) : 4=0.07%, 10=0.59%, 20=1.60%, 50=9.61%, 100=20.09% 00:25:48.741 lat (msec) : 250=53.12%, 500=13.53%, 750=1.39% 00:25:48.741 cpu : usr=1.26%, sys=1.59%, ctx=2429, majf=0, minf=1 00:25:48.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:48.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.741 issued rwts: total=0,4256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.741 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.741 job7: (groupid=0, jobs=1): err= 0: pid=4113906: Mon Jul 15 20:31:26 2024 
00:25:48.741 write: IOPS=608, BW=152MiB/s (160MB/s)(1531MiB/10065msec); 0 zone resets 00:25:48.741 slat (usec): min=16, max=25350, avg=1398.09, stdev=2937.44 00:25:48.741 clat (msec): min=3, max=405, avg=103.74, stdev=45.09 00:25:48.741 lat (msec): min=3, max=405, avg=105.14, stdev=45.54 00:25:48.741 clat percentiles (msec): 00:25:48.741 | 1.00th=[ 21], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 72], 00:25:48.741 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 93], 00:25:48.741 | 70.00th=[ 127], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 184], 00:25:48.741 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 347], 99.95th=[ 380], 00:25:48.741 | 99.99th=[ 405] 00:25:48.741 bw ( KiB/s): min=72192, max=227840, per=12.87%, avg=155199.70, stdev=52560.33, samples=20 00:25:48.741 iops : min= 282, max= 890, avg=606.20, stdev=205.34, samples=20 00:25:48.741 lat (msec) : 4=0.02%, 10=0.15%, 20=0.83%, 50=2.27%, 100=59.02% 00:25:48.741 lat (msec) : 250=37.44%, 500=0.28% 00:25:48.742 cpu : usr=1.91%, sys=1.99%, ctx=2196, majf=0, minf=1 00:25:48.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:48.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.742 issued rwts: total=0,6125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.742 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.742 job8: (groupid=0, jobs=1): err= 0: pid=4113907: Mon Jul 15 20:31:26 2024 00:25:48.742 write: IOPS=309, BW=77.3MiB/s (81.1MB/s)(786MiB/10172msec); 0 zone resets 00:25:48.742 slat (usec): min=14, max=414948, avg=2539.17, stdev=11933.46 00:25:48.742 clat (msec): min=3, max=800, avg=204.36, stdev=126.25 00:25:48.742 lat (msec): min=3, max=800, avg=206.90, stdev=127.54 00:25:48.742 clat percentiles (msec): 00:25:48.742 | 1.00th=[ 21], 5.00th=[ 43], 10.00th=[ 75], 20.00th=[ 117], 00:25:48.742 | 30.00th=[ 150], 40.00th=[ 163], 50.00th=[ 188], 60.00th=[ 222], 00:25:48.742 | 70.00th=[ 245], 80.00th=[ 264], 90.00th=[ 292], 95.00th=[ 347], 00:25:48.742 | 99.00th=[ 776], 99.50th=[ 776], 99.90th=[ 793], 99.95th=[ 802], 00:25:48.742 | 99.99th=[ 802] 00:25:48.742 bw ( KiB/s): min=28672, max=123126, per=6.55%, avg=78911.50, stdev=27948.42, samples=20 00:25:48.742 iops : min= 112, max= 480, avg=308.20, stdev=109.09, samples=20 00:25:48.742 lat (msec) : 4=0.03%, 10=0.25%, 20=0.73%, 50=5.31%, 100=9.28% 00:25:48.742 lat (msec) : 250=58.25%, 500=22.13%, 750=2.38%, 1000=1.62% 00:25:48.742 cpu : usr=0.74%, sys=1.02%, ctx=1592, majf=0, minf=1 00:25:48.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:48.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.742 issued rwts: total=0,3145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.742 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.742 job9: (groupid=0, jobs=1): err= 0: pid=4113908: Mon Jul 15 20:31:26 2024 00:25:48.742 write: IOPS=321, BW=80.5MiB/s (84.4MB/s)(825MiB/10250msec); 0 zone resets 00:25:48.742 slat (usec): min=19, max=414388, avg=2200.05, stdev=11895.08 00:25:48.742 clat (msec): min=3, max=881, avg=196.42, stdev=134.01 00:25:48.742 lat (msec): min=3, max=881, avg=198.62, stdev=135.57 00:25:48.742 clat percentiles (msec): 00:25:48.742 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 57], 20.00th=[ 101], 00:25:48.742 | 30.00th=[ 138], 40.00th=[ 161], 50.00th=[ 182], 60.00th=[ 
207], 00:25:48.742 | 70.00th=[ 236], 80.00th=[ 259], 90.00th=[ 292], 95.00th=[ 380], 00:25:48.742 | 99.00th=[ 827], 99.50th=[ 860], 99.90th=[ 877], 99.95th=[ 885], 00:25:48.742 | 99.99th=[ 885] 00:25:48.742 bw ( KiB/s): min=16384, max=144384, per=6.87%, avg=82867.20, stdev=32558.60, samples=20 00:25:48.742 iops : min= 64, max= 564, avg=323.70, stdev=127.18, samples=20 00:25:48.742 lat (msec) : 4=0.12%, 10=1.30%, 20=3.18%, 50=3.82%, 100=11.55% 00:25:48.742 lat (msec) : 250=55.48%, 500=20.48%, 750=2.64%, 1000=1.42% 00:25:48.742 cpu : usr=0.99%, sys=1.08%, ctx=1936, majf=0, minf=1 00:25:48.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:48.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.742 issued rwts: total=0,3300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.742 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.742 job10: (groupid=0, jobs=1): err= 0: pid=4113909: Mon Jul 15 20:31:26 2024 00:25:48.742 write: IOPS=550, BW=138MiB/s (144MB/s)(1399MiB/10172msec); 0 zone resets 00:25:48.742 slat (usec): min=19, max=64366, avg=1118.28, stdev=3343.46 00:25:48.742 clat (msec): min=2, max=336, avg=114.96, stdev=73.61 00:25:48.742 lat (msec): min=2, max=336, avg=116.08, stdev=74.44 00:25:48.742 clat percentiles (msec): 00:25:48.742 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 43], 20.00th=[ 62], 00:25:48.742 | 30.00th=[ 69], 40.00th=[ 77], 50.00th=[ 88], 60.00th=[ 107], 00:25:48.742 | 70.00th=[ 140], 80.00th=[ 176], 90.00th=[ 232], 95.00th=[ 275], 00:25:48.742 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 334], 99.95th=[ 334], 00:25:48.742 | 99.99th=[ 338] 00:25:48.742 bw ( KiB/s): min=59392, max=240640, per=11.75%, avg=141667.55, stdev=58072.34, samples=20 00:25:48.742 iops : min= 232, max= 940, avg=553.35, stdev=226.78, samples=20 00:25:48.742 lat (msec) : 4=0.05%, 10=1.41%, 20=2.02%, 50=9.24%, 100=44.12% 00:25:48.742 lat (msec) : 250=34.19%, 500=8.97% 00:25:48.742 cpu : usr=1.51%, sys=1.85%, ctx=3236, majf=0, minf=1 00:25:48.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:48.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.742 issued rwts: total=0,5596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.742 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.742 00:25:48.742 Run status group 0 (all jobs): 00:25:48.742 WRITE: bw=1177MiB/s (1235MB/s), 77.3MiB/s-152MiB/s (81.1MB/s-160MB/s), io=11.8GiB (12.7GB), run=10065-10255msec 00:25:48.742 00:25:48.742 Disk stats (read/write): 00:25:48.742 nvme0n1: ios=49/11618, merge=0/0, ticks=48/1212621, in_queue=1212669, util=97.30% 00:25:48.742 nvme10n1: ios=46/7022, merge=0/0, ticks=2466/1244678, in_queue=1247144, util=99.99% 00:25:48.742 nvme1n1: ios=45/9224, merge=0/0, ticks=1872/1212012, in_queue=1213884, util=99.99% 00:25:48.742 nvme2n1: ios=49/6411, merge=0/0, ticks=2036/1235079, in_queue=1237115, util=100.00% 00:25:48.742 nvme3n1: ios=0/7240, merge=0/0, ticks=0/1244405, in_queue=1244405, util=97.78% 00:25:48.742 nvme4n1: ios=43/9402, merge=0/0, ticks=2094/1217244, in_queue=1219338, util=100.00% 00:25:48.742 nvme5n1: ios=43/8349, merge=0/0, ticks=3061/1208730, in_queue=1211791, util=100.00% 00:25:48.742 nvme6n1: ios=38/11996, merge=0/0, ticks=206/1215285, in_queue=1215491, util=99.88% 00:25:48.742 nvme7n1: ios=40/6289, 
merge=0/0, ticks=175/1243007, in_queue=1243182, util=99.90% 00:25:48.742 nvme8n1: ios=45/6555, merge=0/0, ticks=2414/1225926, in_queue=1228340, util=100.00% 00:25:48.742 nvme9n1: ios=47/11191, merge=0/0, ticks=304/1250355, in_queue=1250659, util=100.00% 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:48.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.742 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:48.743 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.743 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.743 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.743 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.743 20:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.743 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.743 20:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:48.743 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.743 20:31:27 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.743 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:49.001 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.001 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:49.261 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.261 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:49.261 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:49.522 20:31:27 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:49.522 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.522 20:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:49.781 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.781 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:50.040 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:50.040 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.040 20:31:28 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:50.298 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:50.298 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@47 -- # nvmftestfini 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.298 rmmod nvme_tcp 00:25:50.298 rmmod nvme_fabrics 00:25:50.298 rmmod nvme_keyring 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 4108463 ']' 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 4108463 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 4108463 ']' 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 4108463 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108463 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108463' 00:25:50.298 killing process with pid 4108463 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 4108463 00:25:50.298 20:31:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 4108463 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.863 20:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.407 20:31:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.407 00:25:53.407 real 1m0.253s 00:25:53.407 user 3m15.538s 00:25:53.407 sys 0m23.484s 00:25:53.407 20:31:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.407 20:31:31 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:53.407 ************************************ 00:25:53.407 END TEST nvmf_multiconnection 00:25:53.407 ************************************ 00:25:53.407 20:31:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:53.407 20:31:31 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:53.407 20:31:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:53.407 20:31:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.407 20:31:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:53.407 ************************************ 00:25:53.407 START TEST nvmf_initiator_timeout 00:25:53.407 ************************************ 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:53.407 * Looking for test storage... 00:25:53.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.407 20:31:31 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:53.407 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.408 20:31:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.311 20:31:33 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:55.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:55.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:55.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:55.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:25:55.311 00:25:55.311 --- 10.0.0.2 ping statistics --- 00:25:55.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.311 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:55.311 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:25:55.311 00:25:55.311 --- 10.0.0.1 ping statistics --- 00:25:55.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.312 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=4117096 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 4117096 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 4117096 ']' 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.312 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.312 [2024-07-15 20:31:33.676329] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
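The nvmf_tcp_init steps traced above (nvmf/common.sh@229-268) reduce to the following standalone sequence. This is only a sketch for orientation; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the values this particular run selected.

    # Clear any stale addresses, then move the target-side port into its own
    # network namespace so initiator and target stacks are isolated on one host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link (the initiator side stays in the root namespace).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links up and open the NVMe/TCP port on the initiator-side interface.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability in both directions before starting nvmf_tgt.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1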
00:25:55.312 [2024-07-15 20:31:33.676421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.312 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.312 [2024-07-15 20:31:33.746828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.312 [2024-07-15 20:31:33.837603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.312 [2024-07-15 20:31:33.837663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.312 [2024-07-15 20:31:33.837689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.312 [2024-07-15 20:31:33.837702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.312 [2024-07-15 20:31:33.837714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.312 [2024-07-15 20:31:33.837810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.312 [2024-07-15 20:31:33.837903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.312 [2024-07-15 20:31:33.837986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.312 [2024-07-15 20:31:33.837989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.570 20:31:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.570 Malloc0 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.570 Delay0 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.570 20:31:34 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.570 [2024-07-15 20:31:34.022216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.570 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.571 [2024-07-15 20:31:34.050514] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.571 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:56.138 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:56.138 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:56.138 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.138 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:56.138 20:31:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:58.665 20:31:36 
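The target configuration that initiator_timeout.sh issued above through its rpc_cmd wrapper, plus the host-side connect, amounts to the sequence below. The standalone scripts/rpc.py form is an assumed equivalent (the harness talks to the same /var/tmp/spdk.sock through rpc_cmd), and the host NQN/ID options are elided here; the log shows the harness passing --hostnqn/--hostid explicitly.

    # Back the subsystem with a malloc bdev wrapped in a delay bdev, so the test
    # can inflate I/O latency later and provoke initiator-side timeouts.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

    # TCP transport, subsystem, namespace, and a listener on the namespaced address.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect with the kernel initiator, then wait until a block
    # device with serial SPDKISFASTANDAWESOME shows up in lsblk.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420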
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4117519 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:58.665 20:31:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:58.665 [global] 00:25:58.665 thread=1 00:25:58.665 invalidate=1 00:25:58.665 rw=write 00:25:58.665 time_based=1 00:25:58.666 runtime=60 00:25:58.666 ioengine=libaio 00:25:58.666 direct=1 00:25:58.666 bs=4096 00:25:58.666 iodepth=1 00:25:58.666 norandommap=0 00:25:58.666 numjobs=1 00:25:58.666 00:25:58.666 verify_dump=1 00:25:58.666 verify_backlog=512 00:25:58.666 verify_state_save=0 00:25:58.666 do_verify=1 00:25:58.666 verify=crc32c-intel 00:25:58.666 [job0] 00:25:58.666 filename=/dev/nvme0n1 00:25:58.666 Could not set queue depth (nvme0n1) 00:25:58.666 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:58.666 fio-3.35 00:25:58.666 Starting 1 thread 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.193 true 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.193 true 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.193 true 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.193 true 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.193 20:31:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 true 00:26:04.481 20:31:42 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 true 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 true 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 true 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:04.481 20:31:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4117519 00:27:00.750 00:27:00.750 job0: (groupid=0, jobs=1): err= 0: pid=4117588: Mon Jul 15 20:32:37 2024 00:27:00.750 read: IOPS=116, BW=466KiB/s (477kB/s)(27.3MiB/60025msec) 00:27:00.750 slat (usec): min=5, max=16579, avg=19.72, stdev=236.83 00:27:00.750 clat (usec): min=329, max=46068, avg=2338.36, stdev=8576.49 00:27:00.750 lat (usec): min=335, max=46088, avg=2358.08, stdev=8580.90 00:27:00.750 clat percentiles (usec): 00:27:00.750 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 388], 00:27:00.750 | 30.00th=[ 408], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 465], 00:27:00.750 | 70.00th=[ 510], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 742], 00:27:00.750 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:27:00.750 | 99.99th=[45876] 00:27:00.750 write: IOPS=119, BW=478KiB/s (489kB/s)(28.0MiB/60025msec); 0 zone resets 00:27:00.750 slat (nsec): min=6991, max=89734, avg=19706.99, stdev=11827.10 00:27:00.750 clat (usec): min=221, max=40934k, avg=6045.78, stdev=483488.94 00:27:00.750 lat (usec): min=229, max=40934k, avg=6065.49, stdev=483488.85 00:27:00.750 clat percentiles (usec): 00:27:00.750 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 251], 00:27:00.750 | 20.00th=[ 269], 30.00th=[ 293], 40.00th=[ 310], 00:27:00.750 | 50.00th=[ 322], 60.00th=[ 343], 70.00th=[ 375], 00:27:00.750 | 80.00th=[ 396], 90.00th=[ 433], 95.00th=[ 457], 00:27:00.750 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 676], 00:27:00.750 | 99.95th=[ 865], 99.99th=[17112761] 00:27:00.750 bw ( KiB/s): min= 1424, max= 6168, per=100.00%, avg=4411.08, stdev=1237.30, samples=13 00:27:00.750 iops : min= 356, max= 1542, avg=1102.77, stdev=309.33, samples=13 00:27:00.750 lat (usec) : 250=4.94%, 500=78.78%, 750=13.87%, 1000=0.13% 00:27:00.750 lat (msec) : 2=0.01%, 4=0.01%, 50=2.25%, >=2000=0.01% 00:27:00.750 cpu : usr=0.35%, 
sys=0.54%, ctx=14157, majf=0, minf=2 00:27:00.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:00.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.750 issued rwts: total=6987,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:00.750 00:27:00.750 Run status group 0 (all jobs): 00:27:00.750 READ: bw=466KiB/s (477kB/s), 466KiB/s-466KiB/s (477kB/s-477kB/s), io=27.3MiB (28.6MB), run=60025-60025msec 00:27:00.750 WRITE: bw=478KiB/s (489kB/s), 478KiB/s-478KiB/s (489kB/s-489kB/s), io=28.0MiB (29.4MB), run=60025-60025msec 00:27:00.750 00:27:00.750 Disk stats (read/write): 00:27:00.750 nvme0n1: ios=7083/7168, merge=0/0, ticks=16162/2212, in_queue=18374, util=99.48% 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:00.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:00.750 nvmf hotplug test: fio successful as expected 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.750 20:32:37 
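The bdev_delay_update_latency calls traced before the fio summary are the core of this test: while the 60-second fio write job is in flight, all four latency knobs on Delay0 are raised into the tens of seconds (longer than the kernel initiator's default 30 s I/O timeout), held for a few seconds, and then dropped back to 30 microseconds, so outstanding I/O first stalls long enough to exercise the timeout/abort path and then drains normally. Issued standalone, the same RPCs would look like this (same rpc.py assumption as above; the values are the ones this run used):

    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3   # let the inflated latency bite while fio still has I/O queued
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30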
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.750 rmmod nvme_tcp 00:27:00.750 rmmod nvme_fabrics 00:27:00.750 rmmod nvme_keyring 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 4117096 ']' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 4117096 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 4117096 ']' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 4117096 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4117096 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4117096' 00:27:00.750 killing process with pid 4117096 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 4117096 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 4117096 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.750 20:32:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.010 20:32:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.010 00:27:01.010 real 1m8.111s 00:27:01.010 user 4m10.131s 00:27:01.010 sys 0m7.365s 00:27:01.010 20:32:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.010 20:32:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.010 ************************************ 00:27:01.010 END TEST nvmf_initiator_timeout 00:27:01.010 ************************************ 00:27:01.010 20:32:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:01.010 20:32:39 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:01.010 20:32:39 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:01.010 20:32:39 nvmf_tcp -- nvmf/nvmf.sh@73 -- # 
gather_supported_nvmf_pci_devs 00:27:01.010 20:32:39 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.010 20:32:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.911 20:32:41 
nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:02.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:02.911 20:32:41 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:02.911 20:32:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:02.911 20:32:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.911 20:32:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.170 ************************************ 00:27:03.170 START TEST nvmf_perf_adq 00:27:03.170 ************************************ 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
00:27:03.170 * Looking for test storage... 00:27:03.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.170 20:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.075 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:05.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:05.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:05.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:05.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:05.076 20:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:06.015 20:32:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:07.919 20:32:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:13.190 20:32:51 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.190 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.191 20:32:51 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:27:13.191 00:27:13.191 --- 10.0.0.2 ping statistics --- 00:27:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.191 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:27:13.191 00:27:13.191 --- 10.0.0.1 ping statistics --- 00:27:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.191 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4129096 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4129096 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4129096 ']' 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.191 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.191 [2024-07-15 20:32:51.518492] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:27:13.191 [2024-07-15 20:32:51.518561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.191 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.191 [2024-07-15 20:32:51.588750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.191 [2024-07-15 20:32:51.682371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.191 [2024-07-15 20:32:51.682442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.191 [2024-07-15 20:32:51.682459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.191 [2024-07-15 20:32:51.682472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.191 [2024-07-15 20:32:51.682485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.191 [2024-07-15 20:32:51.682541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.191 [2024-07-15 20:32:51.682613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.191 [2024-07-15 20:32:51.682710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.191 [2024-07-15 20:32:51.682713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.449 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.450 [2024-07-15 20:32:51.940803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.450 Malloc1 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.450 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.708 [2024-07-15 20:32:51.994021] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4129243 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:13.708 20:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:13.708 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:15.609 
"tick_rate": 2700000000, 00:27:15.609 "poll_groups": [ 00:27:15.609 { 00:27:15.609 "name": "nvmf_tgt_poll_group_000", 00:27:15.609 "admin_qpairs": 1, 00:27:15.609 "io_qpairs": 1, 00:27:15.609 "current_admin_qpairs": 1, 00:27:15.609 "current_io_qpairs": 1, 00:27:15.609 "pending_bdev_io": 0, 00:27:15.609 "completed_nvme_io": 20663, 00:27:15.609 "transports": [ 00:27:15.609 { 00:27:15.609 "trtype": "TCP" 00:27:15.609 } 00:27:15.609 ] 00:27:15.609 }, 00:27:15.609 { 00:27:15.609 "name": "nvmf_tgt_poll_group_001", 00:27:15.609 "admin_qpairs": 0, 00:27:15.609 "io_qpairs": 1, 00:27:15.609 "current_admin_qpairs": 0, 00:27:15.609 "current_io_qpairs": 1, 00:27:15.609 "pending_bdev_io": 0, 00:27:15.609 "completed_nvme_io": 19737, 00:27:15.609 "transports": [ 00:27:15.609 { 00:27:15.609 "trtype": "TCP" 00:27:15.609 } 00:27:15.609 ] 00:27:15.609 }, 00:27:15.609 { 00:27:15.609 "name": "nvmf_tgt_poll_group_002", 00:27:15.609 "admin_qpairs": 0, 00:27:15.609 "io_qpairs": 1, 00:27:15.609 "current_admin_qpairs": 0, 00:27:15.609 "current_io_qpairs": 1, 00:27:15.609 "pending_bdev_io": 0, 00:27:15.609 "completed_nvme_io": 20894, 00:27:15.609 "transports": [ 00:27:15.609 { 00:27:15.609 "trtype": "TCP" 00:27:15.609 } 00:27:15.609 ] 00:27:15.609 }, 00:27:15.609 { 00:27:15.609 "name": "nvmf_tgt_poll_group_003", 00:27:15.609 "admin_qpairs": 0, 00:27:15.609 "io_qpairs": 1, 00:27:15.609 "current_admin_qpairs": 0, 00:27:15.609 "current_io_qpairs": 1, 00:27:15.609 "pending_bdev_io": 0, 00:27:15.609 "completed_nvme_io": 20357, 00:27:15.609 "transports": [ 00:27:15.609 { 00:27:15.609 "trtype": "TCP" 00:27:15.609 } 00:27:15.609 ] 00:27:15.609 } 00:27:15.609 ] 00:27:15.609 }' 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:15.609 20:32:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4129243 00:27:23.746 Initializing NVMe Controllers 00:27:23.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:23.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:23.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:23.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:23.746 Initialization complete. Launching workers. 
00:27:23.746 ======================================================== 00:27:23.746 Latency(us) 00:27:23.746 Device Information : IOPS MiB/s Average min max 00:27:23.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10955.20 42.79 5843.04 1980.09 8210.96 00:27:23.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10359.80 40.47 6177.69 2042.19 9660.79 00:27:23.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10687.90 41.75 5987.85 3183.75 7703.94 00:27:23.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10811.50 42.23 5920.44 1983.40 8386.63 00:27:23.746 ======================================================== 00:27:23.746 Total : 42814.40 167.24 5979.71 1980.09 9660.79 00:27:23.746 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.746 rmmod nvme_tcp 00:27:23.746 rmmod nvme_fabrics 00:27:23.746 rmmod nvme_keyring 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4129096 ']' 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4129096 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4129096 ']' 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4129096 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4129096 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4129096' 00:27:23.746 killing process with pid 4129096 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4129096 00:27:23.746 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4129096 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.003 20:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.535 20:33:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.535 20:33:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:26.535 20:33:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:26.793 20:33:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:28.694 20:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:33.964 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.965 20:33:12 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:33.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:33.965 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:33.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.965 
20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:33.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:27:33.965 00:27:33.965 --- 10.0.0.2 ping statistics --- 00:27:33.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.965 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:27:33.965 00:27:33.965 --- 10.0.0.1 ping statistics --- 00:27:33.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.965 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:33.965 net.core.busy_poll = 1 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:33.965 net.core.busy_read = 1 00:27:33.965 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4132471 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4132471 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4132471 ']' 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.966 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 [2024-07-15 20:33:12.466456] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:27:33.966 [2024-07-15 20:33:12.466549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.223 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.223 [2024-07-15 20:33:12.537007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.223 [2024-07-15 20:33:12.629945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.223 [2024-07-15 20:33:12.630004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.223 [2024-07-15 20:33:12.630020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.223 [2024-07-15 20:33:12.630033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.223 [2024-07-15 20:33:12.630043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:34.223 [2024-07-15 20:33:12.630099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.223 [2024-07-15 20:33:12.630130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.223 [2024-07-15 20:33:12.630254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.223 [2024-07-15 20:33:12.630256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.223 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 [2024-07-15 20:33:12.913960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 Malloc1 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.482 [2024-07-15 20:33:12.967050] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4132502 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:34.482 20:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.482 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.007 20:33:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:37.007 20:33:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.007 20:33:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.007 20:33:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.007 20:33:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:37.007 "tick_rate": 2700000000, 00:27:37.007 "poll_groups": [ 00:27:37.007 { 00:27:37.007 "name": "nvmf_tgt_poll_group_000", 00:27:37.007 "admin_qpairs": 1, 00:27:37.007 "io_qpairs": 2, 00:27:37.007 "current_admin_qpairs": 1, 00:27:37.007 "current_io_qpairs": 2, 00:27:37.007 "pending_bdev_io": 0, 00:27:37.007 "completed_nvme_io": 22410, 00:27:37.007 "transports": [ 00:27:37.007 { 00:27:37.007 "trtype": "TCP" 00:27:37.007 } 00:27:37.007 ] 00:27:37.007 }, 00:27:37.007 { 00:27:37.007 "name": "nvmf_tgt_poll_group_001", 00:27:37.007 "admin_qpairs": 0, 00:27:37.007 "io_qpairs": 2, 00:27:37.007 "current_admin_qpairs": 0, 00:27:37.007 "current_io_qpairs": 2, 00:27:37.007 "pending_bdev_io": 0, 00:27:37.007 "completed_nvme_io": 23962, 00:27:37.007 "transports": [ 00:27:37.007 { 00:27:37.007 "trtype": "TCP" 00:27:37.007 } 00:27:37.007 ] 00:27:37.007 }, 00:27:37.007 { 00:27:37.007 "name": "nvmf_tgt_poll_group_002", 00:27:37.007 "admin_qpairs": 0, 00:27:37.007 "io_qpairs": 0, 00:27:37.007 "current_admin_qpairs": 0, 00:27:37.007 "current_io_qpairs": 0, 00:27:37.007 "pending_bdev_io": 0, 00:27:37.007 "completed_nvme_io": 0, 
00:27:37.007 "transports": [ 00:27:37.007 { 00:27:37.007 "trtype": "TCP" 00:27:37.007 } 00:27:37.007 ] 00:27:37.007 }, 00:27:37.007 { 00:27:37.007 "name": "nvmf_tgt_poll_group_003", 00:27:37.007 "admin_qpairs": 0, 00:27:37.007 "io_qpairs": 0, 00:27:37.007 "current_admin_qpairs": 0, 00:27:37.007 "current_io_qpairs": 0, 00:27:37.007 "pending_bdev_io": 0, 00:27:37.007 "completed_nvme_io": 0, 00:27:37.007 "transports": [ 00:27:37.007 { 00:27:37.007 "trtype": "TCP" 00:27:37.007 } 00:27:37.007 ] 00:27:37.007 } 00:27:37.007 ] 00:27:37.007 }' 00:27:37.007 20:33:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:37.007 20:33:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:37.007 20:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:37.007 20:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:37.007 20:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4132502 00:27:45.150 Initializing NVMe Controllers 00:27:45.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:45.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:45.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:45.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:45.150 Initialization complete. Launching workers. 00:27:45.150 ======================================================== 00:27:45.150 Latency(us) 00:27:45.150 Device Information : IOPS MiB/s Average min max 00:27:45.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6002.30 23.45 10665.93 1263.39 56523.07 00:27:45.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6374.40 24.90 10075.21 1543.06 58331.18 00:27:45.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5962.90 23.29 10739.54 1858.63 58105.73 00:27:45.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6072.00 23.72 10547.05 1696.30 55625.85 00:27:45.150 ======================================================== 00:27:45.150 Total : 24411.60 95.36 10500.09 1263.39 58331.18 00:27:45.150 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:45.150 rmmod nvme_tcp 00:27:45.150 rmmod nvme_fabrics 00:27:45.150 rmmod nvme_keyring 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4132471 ']' 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 4132471 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4132471 ']' 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4132471 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4132471 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4132471' 00:27:45.150 killing process with pid 4132471 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4132471 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4132471 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.150 20:33:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.450 20:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:48.450 20:33:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:48.450 00:27:48.450 real 0m45.098s 00:27:48.450 user 2m35.910s 00:27:48.450 sys 0m11.146s 00:27:48.450 20:33:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.450 20:33:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:48.450 ************************************ 00:27:48.450 END TEST nvmf_perf_adq 00:27:48.450 ************************************ 00:27:48.450 20:33:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:48.450 20:33:26 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:48.450 20:33:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:48.450 20:33:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.450 20:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:48.450 ************************************ 00:27:48.450 START TEST nvmf_shutdown 00:27:48.450 ************************************ 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:48.450 * Looking for test storage... 
00:27:48.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.450 ************************************ 00:27:48.450 START TEST nvmf_shutdown_tc1 00:27:48.450 ************************************ 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:48.450 20:33:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:48.450 20:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:50.353 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:50.353 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.353 20:33:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:50.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:50.353 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.353 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:27:50.354 00:27:50.354 --- 10.0.0.2 ping statistics --- 00:27:50.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.354 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:50.354 00:27:50.354 --- 10.0.0.1 ping statistics --- 00:27:50.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.354 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4135789 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4135789 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4135789 ']' 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:50.354 20:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.354 [2024-07-15 20:33:28.830141] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:27:50.354 [2024-07-15 20:33:28.830228] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.354 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.612 [2024-07-15 20:33:28.899566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.612 [2024-07-15 20:33:28.994827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.612 [2024-07-15 20:33:28.994908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.612 [2024-07-15 20:33:28.994926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.612 [2024-07-15 20:33:28.994939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.612 [2024-07-15 20:33:28.994950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.612 [2024-07-15 20:33:28.995054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.612 [2024-07-15 20:33:28.998893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.612 [2024-07-15 20:33:28.998970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:50.612 [2024-07-15 20:33:28.998974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.612 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:50.612 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:50.612 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:50.612 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:50.612 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.872 [2024-07-15 20:33:29.164851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:50.872 20:33:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.872 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.873 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.873 Malloc1 00:27:50.873 [2024-07-15 20:33:29.252897] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.873 Malloc2 00:27:50.873 Malloc3 00:27:50.873 Malloc4 00:27:51.133 Malloc5 00:27:51.133 Malloc6 00:27:51.133 Malloc7 00:27:51.133 Malloc8 00:27:51.133 Malloc9 00:27:51.393 Malloc10 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4135966 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4135966 
/var/tmp/bdevperf.sock 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4135966 ']' 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:51.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 
"name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 
00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.393 { 00:27:51.393 "params": { 00:27:51.393 "name": "Nvme$subsystem", 00:27:51.393 "trtype": "$TEST_TRANSPORT", 00:27:51.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.393 "adrfam": "ipv4", 00:27:51.393 "trsvcid": "$NVMF_PORT", 00:27:51.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.393 "hdgst": ${hdgst:-false}, 00:27:51.393 "ddgst": ${ddgst:-false} 00:27:51.393 }, 00:27:51.393 "method": "bdev_nvme_attach_controller" 00:27:51.393 } 00:27:51.393 EOF 00:27:51.393 )") 00:27:51.393 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.394 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.394 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.394 { 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme$subsystem", 00:27:51.394 "trtype": "$TEST_TRANSPORT", 00:27:51.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "$NVMF_PORT", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.394 "hdgst": ${hdgst:-false}, 00:27:51.394 "ddgst": ${ddgst:-false} 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 } 00:27:51.394 EOF 00:27:51.394 )") 00:27:51.394 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.394 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:51.394 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:51.394 20:33:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme1", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme2", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme3", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme4", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme5", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme6", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme7", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme8", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:51.394 "hdgst": false, 
00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme9", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 },{ 00:27:51.394 "params": { 00:27:51.394 "name": "Nvme10", 00:27:51.394 "trtype": "tcp", 00:27:51.394 "traddr": "10.0.0.2", 00:27:51.394 "adrfam": "ipv4", 00:27:51.394 "trsvcid": "4420", 00:27:51.394 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:51.394 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:51.394 "hdgst": false, 00:27:51.394 "ddgst": false 00:27:51.394 }, 00:27:51.394 "method": "bdev_nvme_attach_controller" 00:27:51.394 }' 00:27:51.394 [2024-07-15 20:33:29.770704] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:27:51.394 [2024-07-15 20:33:29.770779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:51.394 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.394 [2024-07-15 20:33:29.833623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.394 [2024-07-15 20:33:29.920110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4135966 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:53.300 20:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:54.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4135966 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4135789 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:54.678 20:33:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.678 { 00:27:54.678 "params": { 00:27:54.678 "name": "Nvme$subsystem", 00:27:54.678 "trtype": "$TEST_TRANSPORT", 00:27:54.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.678 "adrfam": "ipv4", 00:27:54.678 "trsvcid": "$NVMF_PORT", 00:27:54.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.678 "hdgst": ${hdgst:-false}, 00:27:54.678 "ddgst": ${ddgst:-false} 00:27:54.678 }, 00:27:54.678 "method": "bdev_nvme_attach_controller" 00:27:54.678 } 00:27:54.678 EOF 00:27:54.678 )") 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.678 { 00:27:54.678 "params": { 00:27:54.678 "name": "Nvme$subsystem", 00:27:54.678 "trtype": "$TEST_TRANSPORT", 00:27:54.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.678 "adrfam": "ipv4", 00:27:54.678 "trsvcid": "$NVMF_PORT", 00:27:54.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.678 "hdgst": ${hdgst:-false}, 00:27:54.678 "ddgst": ${ddgst:-false} 00:27:54.678 }, 00:27:54.678 "method": "bdev_nvme_attach_controller" 00:27:54.678 } 00:27:54.678 EOF 00:27:54.678 )") 00:27:54.678 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.679 { 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme$subsystem", 00:27:54.679 "trtype": "$TEST_TRANSPORT", 00:27:54.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "$NVMF_PORT", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.679 "hdgst": ${hdgst:-false}, 00:27:54.679 "ddgst": ${ddgst:-false} 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 } 00:27:54.679 EOF 00:27:54.679 )") 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:54.679 20:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme1", 00:27:54.679 "trtype": "tcp", 00:27:54.679 "traddr": "10.0.0.2", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 },{ 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme2", 00:27:54.679 "trtype": "tcp", 00:27:54.679 "traddr": "10.0.0.2", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 },{ 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme3", 00:27:54.679 "trtype": "tcp", 00:27:54.679 "traddr": "10.0.0.2", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 },{ 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme4", 00:27:54.679 "trtype": "tcp", 00:27:54.679 "traddr": "10.0.0.2", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 },{ 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme5", 00:27:54.679 "trtype": "tcp", 00:27:54.679 "traddr": "10.0.0.2", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 },{ 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme6", 00:27:54.679 "trtype": "tcp", 00:27:54.679 "traddr": "10.0.0.2", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.679 }, 00:27:54.679 "method": "bdev_nvme_attach_controller" 00:27:54.679 },{ 00:27:54.679 "params": { 00:27:54.679 "name": "Nvme7", 00:27:54.679 "trtype": "tcp", 00:27:54.679 "traddr": "10.0.0.2", 00:27:54.679 "adrfam": "ipv4", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.680 }, 00:27:54.680 "method": "bdev_nvme_attach_controller" 00:27:54.680 },{ 00:27:54.680 "params": { 00:27:54.680 "name": "Nvme8", 00:27:54.680 "trtype": "tcp", 00:27:54.680 "traddr": "10.0.0.2", 00:27:54.680 "adrfam": "ipv4", 00:27:54.680 "trsvcid": "4420", 00:27:54.680 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:54.680 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:54.680 "hdgst": false, 
00:27:54.680 "ddgst": false 00:27:54.680 }, 00:27:54.680 "method": "bdev_nvme_attach_controller" 00:27:54.680 },{ 00:27:54.680 "params": { 00:27:54.680 "name": "Nvme9", 00:27:54.680 "trtype": "tcp", 00:27:54.680 "traddr": "10.0.0.2", 00:27:54.680 "adrfam": "ipv4", 00:27:54.680 "trsvcid": "4420", 00:27:54.680 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:54.680 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:54.680 "hdgst": false, 00:27:54.680 "ddgst": false 00:27:54.680 }, 00:27:54.680 "method": "bdev_nvme_attach_controller" 00:27:54.680 },{ 00:27:54.680 "params": { 00:27:54.680 "name": "Nvme10", 00:27:54.680 "trtype": "tcp", 00:27:54.680 "traddr": "10.0.0.2", 00:27:54.680 "adrfam": "ipv4", 00:27:54.680 "trsvcid": "4420", 00:27:54.680 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:54.680 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:54.680 "hdgst": false, 00:27:54.680 "ddgst": false 00:27:54.680 }, 00:27:54.680 "method": "bdev_nvme_attach_controller" 00:27:54.680 }' 00:27:54.680 [2024-07-15 20:33:32.818480] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:27:54.680 [2024-07-15 20:33:32.818555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136387 ] 00:27:54.680 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.680 [2024-07-15 20:33:32.882060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.680 [2024-07-15 20:33:32.968455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.582 Running I/O for 1 seconds... 00:27:57.517 00:27:57.517 Latency(us) 00:27:57.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.517 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.517 Verification LBA range: start 0x0 length 0x400 00:27:57.517 Nvme1n1 : 1.06 240.73 15.05 0.00 0.00 263164.97 17379.18 256318.58 00:27:57.517 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.517 Verification LBA range: start 0x0 length 0x400 00:27:57.517 Nvme2n1 : 1.07 238.48 14.91 0.00 0.00 260905.91 18058.81 250104.79 00:27:57.517 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.517 Verification LBA range: start 0x0 length 0x400 00:27:57.517 Nvme3n1 : 1.15 222.72 13.92 0.00 0.00 275412.95 22719.15 270299.59 00:27:57.517 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.517 Verification LBA range: start 0x0 length 0x400 00:27:57.518 Nvme4n1 : 1.11 230.21 14.39 0.00 0.00 261648.31 22233.69 246997.90 00:27:57.518 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.518 Verification LBA range: start 0x0 length 0x400 00:27:57.518 Nvme5n1 : 1.08 236.84 14.80 0.00 0.00 249001.91 35923.44 234570.33 00:27:57.518 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.518 Verification LBA range: start 0x0 length 0x400 00:27:57.518 Nvme6n1 : 1.16 275.06 17.19 0.00 0.00 211368.32 7136.14 245444.46 00:27:57.518 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.518 Verification LBA range: start 0x0 length 0x400 00:27:57.518 Nvme7n1 : 1.15 221.65 13.85 0.00 0.00 258555.83 22816.24 282727.16 00:27:57.518 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.518 Verification LBA range: start 
0x0 length 0x400 00:27:57.518 Nvme8n1 : 1.17 218.79 13.67 0.00 0.00 257926.83 23107.51 256318.58 00:27:57.518 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.518 Verification LBA range: start 0x0 length 0x400 00:27:57.518 Nvme9n1 : 1.18 271.82 16.99 0.00 0.00 204212.15 17573.36 228356.55 00:27:57.518 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.518 Verification LBA range: start 0x0 length 0x400 00:27:57.518 Nvme10n1 : 1.17 219.33 13.71 0.00 0.00 248471.32 22233.69 285834.05 00:27:57.518 =================================================================================================================== 00:27:57.518 Total : 2375.62 148.48 0.00 0.00 247101.30 7136.14 285834.05 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.775 rmmod nvme_tcp 00:27:57.775 rmmod nvme_fabrics 00:27:57.775 rmmod nvme_keyring 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4135789 ']' 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4135789 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 4135789 ']' 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 4135789 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4135789 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4135789' 00:27:57.775 killing process with pid 4135789 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 4135789 00:27:57.775 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 4135789 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.340 20:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.244 00:28:00.244 real 0m12.044s 00:28:00.244 user 0m35.407s 00:28:00.244 sys 0m3.300s 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:00.244 ************************************ 00:28:00.244 END TEST nvmf_shutdown_tc1 00:28:00.244 ************************************ 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.244 20:33:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:00.505 ************************************ 00:28:00.505 START TEST nvmf_shutdown_tc2 00:28:00.505 ************************************ 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.505 20:33:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.505 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:28:00.506 00:28:00.506 --- 10.0.0.2 ping statistics --- 00:28:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.506 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:28:00.506 00:28:00.506 --- 10.0.0.1 ping statistics --- 00:28:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.506 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=4137147 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4137147 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4137147 ']' 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.506 20:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.506 [2024-07-15 20:33:39.030194] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:00.506 [2024-07-15 20:33:39.030284] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.767 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.767 [2024-07-15 20:33:39.104127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.768 [2024-07-15 20:33:39.192769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.768 [2024-07-15 20:33:39.192828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.768 [2024-07-15 20:33:39.192856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.768 [2024-07-15 20:33:39.192867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.768 [2024-07-15 20:33:39.192883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
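Up to this point nvmftestinit has turned the two e810 ports into a small point-to-point test network: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are pinged, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that sequence, using the interface names and addresses from the trace (error handling omitted; the harness wraps each step in its own helpers):

  ip netns add cvl_0_0_ns_spdk                                  # namespace that hosts the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
  # start the target inside the namespace; -m 0x1E pins reactors to cores 1-4, matching the log below
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &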
00:28:00.768 [2024-07-15 20:33:39.196896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.768 [2024-07-15 20:33:39.197109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.768 [2024-07-15 20:33:39.197152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:00.768 [2024-07-15 20:33:39.197155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.026 [2024-07-15 20:33:39.350944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.026 20:33:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.026 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.027 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.027 Malloc1 00:28:01.027 [2024-07-15 20:33:39.440440] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.027 Malloc2 00:28:01.027 Malloc3 00:28:01.316 Malloc4 00:28:01.316 Malloc5 00:28:01.316 Malloc6 00:28:01.316 Malloc7 00:28:01.316 Malloc8 00:28:01.316 Malloc9 00:28:01.575 Malloc10 00:28:01.575 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.575 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:01.575 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:01.575 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.575 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4137327 00:28:01.575 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4137327 /var/tmp/bdevperf.sock 00:28:01.575 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4137327 ']' 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:01.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 
00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.576 { 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme$subsystem", 00:28:01.576 "trtype": "$TEST_TRANSPORT", 00:28:01.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "$NVMF_PORT", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.576 "hdgst": ${hdgst:-false}, 00:28:01.576 "ddgst": ${ddgst:-false} 00:28:01.576 }, 00:28:01.576 "method": "bdev_nvme_attach_controller" 00:28:01.576 } 00:28:01.576 EOF 00:28:01.576 )") 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:01.576 20:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:01.576 "params": { 00:28:01.576 "name": "Nvme1", 00:28:01.576 "trtype": "tcp", 00:28:01.576 "traddr": "10.0.0.2", 00:28:01.576 "adrfam": "ipv4", 00:28:01.576 "trsvcid": "4420", 00:28:01.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:01.576 "hdgst": false, 00:28:01.576 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme2", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme3", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme4", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme5", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme6", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme7", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme8", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:01.577 "hdgst": false, 
00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme9", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 },{ 00:28:01.577 "params": { 00:28:01.577 "name": "Nvme10", 00:28:01.577 "trtype": "tcp", 00:28:01.577 "traddr": "10.0.0.2", 00:28:01.577 "adrfam": "ipv4", 00:28:01.577 "trsvcid": "4420", 00:28:01.577 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:01.577 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:01.577 "hdgst": false, 00:28:01.577 "ddgst": false 00:28:01.577 }, 00:28:01.577 "method": "bdev_nvme_attach_controller" 00:28:01.577 }' 00:28:01.577 [2024-07-15 20:33:39.957061] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:01.577 [2024-07-15 20:33:39.957142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137327 ] 00:28:01.577 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.577 [2024-07-15 20:33:40.021641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.835 [2024-07-15 20:33:40.111932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.214 Running I/O for 10 seconds... 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.473 20:33:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.473 20:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.731 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:03.732 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:03.732 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:03.990 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4137327 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4137327 ']' 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4137327 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4137327 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4137327' 00:28:04.251 killing process with pid 4137327 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4137327 00:28:04.251 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4137327 00:28:04.251 Received shutdown signal, test time was about 0.937654 seconds 00:28:04.251 00:28:04.251 Latency(us) 00:28:04.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.251 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme1n1 : 0.92 209.75 13.11 0.00 0.00 301445.37 20486.07 262532.36 00:28:04.251 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme2n1 : 0.90 214.04 13.38 0.00 0.00 289182.15 26796.94 260978.92 00:28:04.251 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme3n1 : 0.91 210.50 13.16 0.00 0.00 288012.52 20971.52 264085.81 00:28:04.251 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme4n1 : 0.94 273.27 17.08 0.00 0.00 217530.41 19126.80 239230.67 00:28:04.251 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme5n1 : 0.89 215.61 13.48 0.00 0.00 268304.94 19612.25 259425.47 00:28:04.251 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme6n1 : 0.88 217.95 13.62 0.00 0.00 259264.16 24660.95 295154.73 00:28:04.251 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme7n1 : 0.93 274.55 17.16 0.00 0.00 202590.63 19612.25 248551.35 00:28:04.251 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme8n1 : 0.90 213.40 13.34 0.00 0.00 253421.80 16699.54 259425.47 00:28:04.251 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme9n1 : 0.92 207.66 12.98 0.00 0.00 256170.41 20000.62 295154.73 00:28:04.251 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:04.251 Verification LBA range: start 0x0 length 0x400 00:28:04.251 Nvme10n1 : 0.93 207.11 12.94 0.00 0.00 251233.98 23690.05 274959.93 00:28:04.251 
=================================================================================================================== 00:28:04.251 Total : 2243.84 140.24 0.00 0.00 255674.69 16699.54 295154.73 00:28:04.511 20:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4137147 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.449 rmmod nvme_tcp 00:28:05.449 rmmod nvme_fabrics 00:28:05.449 rmmod nvme_keyring 00:28:05.449 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4137147 ']' 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4137147 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4137147 ']' 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4137147 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.707 20:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4137147 00:28:05.707 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:05.707 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:05.707 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4137147' 00:28:05.707 killing process with pid 4137147 00:28:05.707 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4137147 00:28:05.707 20:33:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4137147 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.964 20:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.507 00:28:08.507 real 0m7.713s 00:28:08.507 user 0m23.344s 00:28:08.507 sys 0m1.580s 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.507 ************************************ 00:28:08.507 END TEST nvmf_shutdown_tc2 00:28:08.507 ************************************ 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:08.507 ************************************ 00:28:08.507 START TEST nvmf_shutdown_tc3 00:28:08.507 ************************************ 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.507 20:33:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.507 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.508 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.508 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.508 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.508 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.508 20:33:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.508 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:08.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:28:08.509 00:28:08.509 --- 10.0.0.2 ping statistics --- 00:28:08.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.509 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:28:08.509 00:28:08.509 --- 10.0.0.1 ping statistics --- 00:28:08.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.509 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4138242 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4138242 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4138242 ']' 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:08.509 20:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.509 [2024-07-15 20:33:46.792366] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:08.509 [2024-07-15 20:33:46.792462] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.509 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.509 [2024-07-15 20:33:46.863589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.509 [2024-07-15 20:33:46.952744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.509 [2024-07-15 20:33:46.952801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.509 [2024-07-15 20:33:46.952829] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.509 [2024-07-15 20:33:46.952840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.509 [2024-07-15 20:33:46.952849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
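Note: the trace above is nvmf_tcp_init followed by nvmfappstart. The first e810 port (cvl_0_0) is moved into the namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), the second port (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, reachability is checked with ping, and nvmf_tgt is launched inside the namespace with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF) and core mask 0x1E. A condensed sketch of the same steps, with paths shortened relative to the SPDK tree and an rpc_get_methods poll standing in for waitforlisten:

  # run as root from the SPDK source tree (sketch, commands taken from the trace above)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # rough stand-in for waitforlisten: poll the RPC socket until the target answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done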
00:28:08.509 [2024-07-15 20:33:46.956897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.509 [2024-07-15 20:33:46.956966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.509 [2024-07-15 20:33:46.957030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:08.509 [2024-07-15 20:33:46.957033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.770 [2024-07-15 20:33:47.114724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.770 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.770 Malloc1 00:28:08.770 [2024-07-15 20:33:47.201997] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.770 Malloc2 00:28:08.770 Malloc3 00:28:09.028 Malloc4 00:28:09.028 Malloc5 00:28:09.028 Malloc6 00:28:09.028 Malloc7 00:28:09.028 Malloc8 00:28:09.286 Malloc9 00:28:09.286 Malloc10 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4138416 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4138416 /var/tmp/bdevperf.sock 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4138416 ']' 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
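Note: shutdown.sh@20 above creates the TCP transport (rpc_cmd nvmf_create_transport -t tcp -o -u 8192, where -u 8192 sets the I/O unit size and -o is passed through from NVMF_TRANSPORT_OPTS), and the create_subsystems loop then batches one block of RPCs per subsystem into rpcs.txt, which is what yields Malloc1 through Malloc10 and the listener on 10.0.0.2:4420 seen here. A hedged sketch of the equivalent per-subsystem RPCs issued one at a time; the 64 MiB bdev size, 512-byte block size and SPDK$i serial numbers are illustrative, not read from this log:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 10); do
      ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done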
00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:09.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.286 { 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme$subsystem", 00:28:09.286 "trtype": "$TEST_TRANSPORT", 00:28:09.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "$NVMF_PORT", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.286 "hdgst": ${hdgst:-false}, 00:28:09.286 "ddgst": ${ddgst:-false} 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 } 00:28:09.286 EOF 00:28:09.286 )") 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:09.286 20:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme1", 00:28:09.286 "trtype": "tcp", 00:28:09.286 "traddr": "10.0.0.2", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "4420", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:09.286 "hdgst": false, 00:28:09.286 "ddgst": false 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 },{ 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme2", 00:28:09.286 "trtype": "tcp", 00:28:09.286 "traddr": "10.0.0.2", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "4420", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:09.286 "hdgst": false, 00:28:09.286 "ddgst": false 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 },{ 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme3", 00:28:09.286 "trtype": "tcp", 00:28:09.286 "traddr": "10.0.0.2", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "4420", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:09.286 "hdgst": false, 00:28:09.286 "ddgst": false 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 },{ 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme4", 00:28:09.286 "trtype": "tcp", 00:28:09.286 "traddr": "10.0.0.2", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "4420", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:09.286 "hdgst": false, 00:28:09.286 "ddgst": false 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.286 },{ 00:28:09.286 "params": { 00:28:09.286 "name": "Nvme5", 00:28:09.286 "trtype": "tcp", 00:28:09.286 "traddr": "10.0.0.2", 00:28:09.286 "adrfam": "ipv4", 00:28:09.286 "trsvcid": "4420", 00:28:09.286 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:09.286 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:09.286 "hdgst": false, 00:28:09.286 "ddgst": false 00:28:09.286 }, 00:28:09.286 "method": "bdev_nvme_attach_controller" 00:28:09.287 },{ 00:28:09.287 "params": { 00:28:09.287 "name": "Nvme6", 00:28:09.287 "trtype": "tcp", 00:28:09.287 "traddr": "10.0.0.2", 00:28:09.287 "adrfam": "ipv4", 00:28:09.287 "trsvcid": "4420", 00:28:09.287 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:09.287 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:09.287 "hdgst": false, 00:28:09.287 "ddgst": false 00:28:09.287 }, 00:28:09.287 "method": "bdev_nvme_attach_controller" 00:28:09.287 },{ 00:28:09.287 "params": { 00:28:09.287 "name": "Nvme7", 00:28:09.287 "trtype": "tcp", 00:28:09.287 "traddr": "10.0.0.2", 00:28:09.287 "adrfam": "ipv4", 00:28:09.287 "trsvcid": "4420", 00:28:09.287 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:09.287 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:09.287 "hdgst": false, 00:28:09.287 "ddgst": false 00:28:09.287 }, 00:28:09.287 "method": "bdev_nvme_attach_controller" 00:28:09.287 },{ 00:28:09.287 "params": { 00:28:09.287 "name": "Nvme8", 00:28:09.287 "trtype": "tcp", 00:28:09.287 "traddr": "10.0.0.2", 00:28:09.287 "adrfam": "ipv4", 00:28:09.287 "trsvcid": "4420", 00:28:09.287 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:09.287 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:09.287 "hdgst": false, 
00:28:09.287 "ddgst": false 00:28:09.287 }, 00:28:09.287 "method": "bdev_nvme_attach_controller" 00:28:09.287 },{ 00:28:09.287 "params": { 00:28:09.287 "name": "Nvme9", 00:28:09.287 "trtype": "tcp", 00:28:09.287 "traddr": "10.0.0.2", 00:28:09.287 "adrfam": "ipv4", 00:28:09.287 "trsvcid": "4420", 00:28:09.287 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:09.287 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:09.287 "hdgst": false, 00:28:09.287 "ddgst": false 00:28:09.287 }, 00:28:09.287 "method": "bdev_nvme_attach_controller" 00:28:09.287 },{ 00:28:09.287 "params": { 00:28:09.287 "name": "Nvme10", 00:28:09.287 "trtype": "tcp", 00:28:09.287 "traddr": "10.0.0.2", 00:28:09.287 "adrfam": "ipv4", 00:28:09.287 "trsvcid": "4420", 00:28:09.287 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:09.287 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:09.287 "hdgst": false, 00:28:09.287 "ddgst": false 00:28:09.287 }, 00:28:09.287 "method": "bdev_nvme_attach_controller" 00:28:09.287 }' 00:28:09.287 [2024-07-15 20:33:47.713754] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:09.287 [2024-07-15 20:33:47.713833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138416 ] 00:28:09.287 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.287 [2024-07-15 20:33:47.779014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.544 [2024-07-15 20:33:47.865428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.917 Running I/O for 10 seconds... 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:11.484 20:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4138242 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 4138242 ']' 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 4138242 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4138242 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4138242' 00:28:11.757 killing process with pid 4138242 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 4138242 00:28:11.757 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 4138242 00:28:11.757 
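Note: waitforio above polls bdevperf over its RPC socket until Nvme1n1 reports at least 100 completed reads (67 on the first pass, 135 on the second), and tc3 then kills the still-running target (pid 4138242) to exercise shutdown while I/O is in flight; the tcp.c error spam that follows appears to be the target logging qpair receive-state transitions during that forced shutdown. A compact sketch of the polling step, using the same socket path and jq filter as the trace:

  for i in $(seq 10 -1 1); do
      ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
      [ "$ops" -ge 100 ] && break
      sleep 0.25
  done
  kill "$nvmfpid" && wait "$nvmfpid"    # pid 4138242 in this run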
[2024-07-15 20:33:50.084532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084671] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084745] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.757 [2024-07-15 20:33:50.084870] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084920] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084947] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084960] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.084999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085018] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085223] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085344] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085382] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.085453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600a90 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the 
state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087870] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087958] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.087996] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088033] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088060] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088104] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.758 [2024-07-15 20:33:50.088310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088368] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088405] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 
20:33:50.088417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088444] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088492] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.088622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f30 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090271] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same 
with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090601] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090623] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090666] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090687] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090801] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.090994] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091136] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091304] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.759 [2024-07-15 20:33:50.091346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the 
state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091602] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.091685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093331] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093384] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093469] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093483] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 
20:33:50.093495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093568] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093687] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same 
with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093811] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093926] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.760 [2024-07-15 20:33:50.093963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.093978] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.093990] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601890 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 
[2024-07-15 20:33:50.094245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0ee0 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a198c0 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a17b10 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094851] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.761 [2024-07-15 20:33:50.094915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.761 [2024-07-15 20:33:50.094932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7950 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094960] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.094997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095010] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095431] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095448] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095460] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095486] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) 
to be set 00:28:11.761 [2024-07-15 20:33:50.095548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.761 [2024-07-15 20:33:50.095561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095642] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095740] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095842] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095944] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.095994] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.096006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.096019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.096031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.096043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601d30 is same with the state(5) to be set 00:28:11.762 [2024-07-15 20:33:50.096267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.762 [2024-07-15 20:33:50.096579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.762 [2024-07-15 20:33:50.096594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:11.763 [2024-07-15 20:33:50.096665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.096851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.096867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:11.763 [2024-07-15 20:33:50.097122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097145] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16021f0 is same with the state(5) to be set 00:28:11.763 [2024-07-15 20:33:50.097166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 
20:33:50.097411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 
20:33:50.097710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.763 [2024-07-15 20:33:50.097891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.763 [2024-07-15 20:33:50.097907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.097921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.097936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.097950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.097964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.097978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.097993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 
20:33:50.098022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098248] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098272] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098279] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098334] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.764 [2024-07-15 20:33:50.098347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.764 [2024-07-15 20:33:50.098361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f28c0 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098435] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098463]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098475] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098592] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098616] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the 
state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098755] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.764 [2024-07-15 20:33:50.098794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098820] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098906] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f28c0 was disconnected and freed. reset controller. 00:28:11.765 [2024-07-15 20:33:50.098918] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098944] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098956] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.098993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099018] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099042] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099067] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce630 is same with the state(5) to be set 00:28:11.765 [2024-07-15 20:33:50.099342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:11.765 [2024-07-15 20:33:50.099925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.099984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.099997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.100013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.100027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.100043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.100056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.100071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.100086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.100101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.765 [2024-07-15 20:33:50.100115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.765 [2024-07-15 20:33:50.100132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 
[2024-07-15 20:33:50.100229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 
20:33:50.100513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100525] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100647] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:11.766 [2024-07-15 20:33:50.100673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100687] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100768] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766
[2024-07-15 20:33:50.100829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.766 [2024-07-15 20:33:50.100830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.766 [2024-07-15 20:33:50.100845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.766 [2024-07-15 20:33:50.100844] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.100886] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.100901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.100913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.100927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.100963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.100978] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.100990] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.100996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767
[2024-07-15 20:33:50.101003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101017] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101042] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101081] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101150] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767
[2024-07-15 20:33:50.101151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101219] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101259] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101271] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.767 [2024-07-15 20:33:50.101284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.767 [2024-07-15 20:33:50.101293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.767 [2024-07-15 20:33:50.101296] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.768
[2024-07-15 20:33:50.101307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.768 [2024-07-15 20:33:50.101309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.101326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.101327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.768 [2024-07-15 20:33:50.101338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cead0 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.101342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.768 [2024-07-15 20:33:50.101446] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19ed700 was disconnected and freed. reset controller. 00:28:11.768 [2024-07-15 20:33:50.102140] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102243] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102283] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102304] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102348] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102394] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15
20:33:50.102485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102623] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102765] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same 
with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102789] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:11.768 [2024-07-15 20:33:50.102866] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7950 (9): Bad file descriptor 00:28:11.768 [2024-07-15 20:33:50.102929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102943] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102955] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.102997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103010] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103329] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103363] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103400] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103461] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.103473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cef70 is same with the state(5) to be set 00:28:11.768 [2024-07-15 20:33:50.104494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:11.769 [2024-07-15 20:33:50.104556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a393d0 (9): Bad file descriptor 00:28:11.769 [2024-07-15 20:33:50.104613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e0ee0 (9): Bad file descriptor 00:28:11.769 [2024-07-15 20:33:50.104648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a198c0 (9): Bad file descriptor 00:28:11.769 [2024-07-15 20:33:50.104678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a19370 (9): Bad file descriptor 00:28:11.769 [2024-07-15 20:33:50.104707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a17b10 (9): Bad file descriptor 00:28:11.769 [2024-07-15 20:33:50.104757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.104778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.104793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.104805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.104819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.104836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.104851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.104870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:11.769 [2024-07-15 20:33:50.104890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a40490 is same with the state(5) to be set 00:28:11.769 [2024-07-15 20:33:50.104943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.104964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.104979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.104991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150f610 is same with the state(5) to be set 00:28:11.769 [2024-07-15 20:33:50.105100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1babc40 is same with the state(5) to be set 00:28:11.769 [2024-07-15 20:33:50.105268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.769 [2024-07-15 20:33:50.105375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.105388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbf030 is same with the state(5) to be set 00:28:11.769 [2024-07-15 20:33:50.106351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.769 [2024-07-15 20:33:50.106381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7950 with addr=10.0.0.2, port=4420 00:28:11.769 [2024-07-15 20:33:50.106398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7950 is same with the state(5) to be set 00:28:11.769 [2024-07-15 20:33:50.106468] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.769 [2024-07-15 20:33:50.106534] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.769 [2024-07-15 20:33:50.106598] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.769 [2024-07-15 20:33:50.106662] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.769 [2024-07-15 20:33:50.107348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.769 [2024-07-15 20:33:50.107377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a393d0 with addr=10.0.0.2, port=4420 00:28:11.769 [2024-07-15 20:33:50.107393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a393d0 is same with the state(5) to be set 00:28:11.769 [2024-07-15 20:33:50.107413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7950 (9): Bad file descriptor 00:28:11.769 [2024-07-15 20:33:50.107518] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.769 [2024-07-15 20:33:50.107589] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.769 [2024-07-15 20:33:50.107643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.769 [2024-07-15 20:33:50.107972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.769 [2024-07-15 20:33:50.107988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:11.770 [2024-07-15 20:33:50.108030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 
[2024-07-15 20:33:50.108337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 
20:33:50.108636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108948] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.108980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.108997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.109011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.109027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.109040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.109056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.109069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.109085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.109098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.770 [2024-07-15 20:33:50.109114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.770 [2024-07-15 20:33:50.109128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.109620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.109634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f0080 is same with the state(5) to be set 00:28:11.771 [2024-07-15 20:33:50.109706] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f0080 was disconnected and freed. reset controller. 00:28:11.771 [2024-07-15 20:33:50.109817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a393d0 (9): Bad file descriptor 00:28:11.771 [2024-07-15 20:33:50.109842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:11.771 [2024-07-15 20:33:50.109856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:11.771 [2024-07-15 20:33:50.109890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:11.771 [2024-07-15 20:33:50.111192] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.771 [2024-07-15 20:33:50.111225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.771 [2024-07-15 20:33:50.111244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:11.771 [2024-07-15 20:33:50.111276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1babc40 (9): Bad file descriptor 00:28:11.771 [2024-07-15 20:33:50.111298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:11.771 [2024-07-15 20:33:50.111311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:11.771 [2024-07-15 20:33:50.111324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:11.771 [2024-07-15 20:33:50.111408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:11.771 [2024-07-15 20:33:50.111886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.771 [2024-07-15 20:33:50.111915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1babc40 with addr=10.0.0.2, port=4420 00:28:11.771 [2024-07-15 20:33:50.111931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1babc40 is same with the state(5) to be set 00:28:11.771 [2024-07-15 20:33:50.112000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1babc40 (9): Bad file descriptor 00:28:11.771 [2024-07-15 20:33:50.112069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:11.771 [2024-07-15 20:33:50.112087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:11.771 [2024-07-15 20:33:50.112101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:11.771 [2024-07-15 20:33:50.112156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.771 [2024-07-15 20:33:50.114564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a40490 (9): Bad file descriptor 00:28:11.771 [2024-07-15 20:33:50.114602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150f610 (9): Bad file descriptor 00:28:11.771 [2024-07-15 20:33:50.114637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbf030 (9): Bad file descriptor 00:28:11.771 [2024-07-15 20:33:50.114783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.114807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.114832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.114847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.114873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.114895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.114912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.114926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.114943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.114957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.114973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.114992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.115008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.115022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.771 [2024-07-15 20:33:50.115038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.771 [2024-07-15 20:33:50.115052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.772 [2024-07-15 20:33:50.115823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.772 [2024-07-15 20:33:50.115839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.115852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.115872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:11.773 [2024-07-15 20:33:50.115894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.115911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.115925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.115941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.115955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.115971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.115984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 
20:33:50.116202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.116732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.116747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e40e0 is same with the state(5) to be set 00:28:11.773 [2024-07-15 20:33:50.118030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.118054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.118075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.118091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.118107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.118120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.118136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.118150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.118180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.118195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.118210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.773 [2024-07-15 20:33:50.118224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.773 [2024-07-15 20:33:50.118239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.118975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.118991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:11.774 [2024-07-15 20:33:50.119306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.774 [2024-07-15 20:33:50.119322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:11.775 [2024-07-15 20:33:50.119599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 
20:33:50.119910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.119968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.119982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e5280 is same with the state(5) to be set 00:28:11.775 [2024-07-15 20:33:50.121240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.775 [2024-07-15 20:33:50.121612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.775 [2024-07-15 20:33:50.121626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.121976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.121992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.776 [2024-07-15 20:33:50.122686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.776 [2024-07-15 20:33:50.122699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.122975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.122989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.123018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.123049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.123078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.123107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.123136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.123165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.123194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.123208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eac50 is same with the state(5) to be set 00:28:11.777 [2024-07-15 20:33:50.124461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.777 [2024-07-15 20:33:50.124787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.777 [2024-07-15 20:33:50.124803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.124817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.124832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.124846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.124867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.124889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.124907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.124922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.124938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.124952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.124968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.124981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.124997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.778 [2024-07-15 20:33:50.125854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.778 [2024-07-15 20:33:50.125868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.125896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.125912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.125928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.125942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.125957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.125970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.125986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.126395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.126409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eae30 is same with the state(5) to be set 00:28:11.779 [2024-07-15 20:33:50.127652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:11.779 [2024-07-15 20:33:50.127683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:11.779 [2024-07-15 20:33:50.127700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:11.779 [2024-07-15 20:33:50.127809] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.779 [2024-07-15 20:33:50.127855] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.779 [2024-07-15 20:33:50.127960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:11.779 [2024-07-15 20:33:50.127985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:11.779 [2024-07-15 20:33:50.128324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.779 [2024-07-15 20:33:50.128354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e0ee0 with addr=10.0.0.2, port=4420 00:28:11.779 [2024-07-15 20:33:50.128371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0ee0 is same with the state(5) to be set 00:28:11.779 [2024-07-15 20:33:50.128513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.779 [2024-07-15 20:33:50.128538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a198c0 with addr=10.0.0.2, port=4420 00:28:11.779 [2024-07-15 20:33:50.128554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a198c0 is same with the state(5) to be set 00:28:11.779 [2024-07-15 20:33:50.128694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.779 [2024-07-15 20:33:50.128719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a19370 with addr=10.0.0.2, port=4420 00:28:11.779 [2024-07-15 20:33:50.128733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:28:11.779 [2024-07-15 20:33:50.129813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.129837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.129859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 
[2024-07-15 20:33:50.129881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.129900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.129915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.129931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.129945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.129960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.129980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.129996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.779 [2024-07-15 20:33:50.130217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.779 [2024-07-15 20:33:50.130233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.130974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.130989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.780 [2024-07-15 20:33:50.131288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.780 [2024-07-15 20:33:50.131301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.131745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.131759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec240 is same with the state(5) to be set 00:28:11.781 [2024-07-15 20:33:50.133007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.781 [2024-07-15 20:33:50.133577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.781 [2024-07-15 20:33:50.133593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.133974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.133990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:11.782 [2024-07-15 20:33:50.134151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.782 [2024-07-15 20:33:50.134412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.782 [2024-07-15 20:33:50.134425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 
20:33:50.134454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134749] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.134931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.134945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eebc0 is same with the state(5) to be set 00:28:11.783 [2024-07-15 20:33:50.136172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.783 [2024-07-15 20:33:50.136708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.783 [2024-07-15 20:33:50.136722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.136966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.136985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.784 [2024-07-15 20:33:50.137777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.784 [2024-07-15 20:33:50.137792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:11.785 [2024-07-15 20:33:50.137806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.137821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.137835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.137850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.137864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.137886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.137902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.137917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.137931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.137946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.137959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.137975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.137988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.138004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.138018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.138034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.138047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.138063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 20:33:50.138077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.785 [2024-07-15 20:33:50.138092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.785 [2024-07-15 
20:33:50.138109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.785 [2024-07-15 20:33:50.138124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f1400 is same with the state(5) to be set
00:28:11.785 [2024-07-15 20:33:50.139702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:11.785 [2024-07-15 20:33:50.139736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:11.785 [2024-07-15 20:33:50.139756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:11.785 [2024-07-15 20:33:50.139773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:11.785 task offset: 16384 on job bdev=Nvme10n1 fails
00:28:11.785
00:28:11.785 Latency(us)
00:28:11.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:11.785 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme1n1 ended in about 0.74 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme1n1 : 0.74 178.26 11.14 86.43 0.00 238523.30 21262.79 251658.24
00:28:11.785 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme2n1 ended in about 0.74 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme2n1 : 0.74 172.11 10.76 86.06 0.00 238469.37 21262.79 256318.58
00:28:11.785 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme3n1 ended in about 0.75 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme3n1 : 0.75 171.37 10.71 85.69 0.00 233438.12 22913.33 257872.02
00:28:11.785 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme4n1 ended in about 0.75 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme4n1 : 0.75 175.98 11.00 85.32 0.00 223787.11 23787.14 229910.00
00:28:11.785 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme5n1 ended in about 0.76 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme5n1 : 0.76 84.72 5.29 84.72 0.00 336269.46 57089.14 257872.02
00:28:11.785 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme6n1 ended in about 0.73 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme6n1 : 0.73 176.08 11.01 88.04 0.00 208427.24 7524.50 253211.69
00:28:11.785 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme7n1 ended in about 0.76 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme7n1 : 0.76 84.36 5.27 84.36 0.00 319550.20 42331.40 274959.93
00:28:11.785 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme8n1 ended in about 0.73 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme8n1 : 0.73 174.45 10.90 87.23 0.00 198743.74 14078.10 265639.25
00:28:11.785 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme9n1 ended in about 0.76 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme9n1 : 0.76 84.01 5.25 84.01 0.00 303474.92 23107.51 295154.73
00:28:11.785 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:11.785 Job: Nvme10n1 ended in about 0.73 seconds with error
00:28:11.785 Verification LBA range: start 0x0 length 0x400
00:28:11.785 Nvme10n1 : 0.73 176.46 11.03 88.23 0.00 184205.59 7961.41 254765.13
00:28:11.785 ===================================================================================================================
00:28:11.785 Total : 1477.80 92.36 860.08 0.00 240525.97 7524.50 295154.73
00:28:11.785 [2024-07-15 20:33:50.164836] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:11.785 [2024-07-15 20:33:50.164934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:11.785 [2024-07-15 20:33:50.165387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.785 [2024-07-15 20:33:50.165433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a17b10 with addr=10.0.0.2, port=4420
00:28:11.785 [2024-07-15 20:33:50.165452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a17b10 is same with the state(5) to be set
00:28:11.785 [2024-07-15 20:33:50.165602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.785 [2024-07-15 20:33:50.165629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7950 with addr=10.0.0.2, port=4420
00:28:11.785 [2024-07-15 20:33:50.165645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7950 is same with the state(5) to be set
00:28:11.785 [2024-07-15 20:33:50.165671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e0ee0 (9): Bad file descriptor
00:28:11.785 [2024-07-15 20:33:50.165693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a198c0 (9): Bad file descriptor
00:28:11.785 [2024-07-15 20:33:50.165712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a19370 (9): Bad file descriptor
00:28:11.785 [2024-07-15 20:33:50.165771] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:11.785 [2024-07-15 20:33:50.165797] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:11.785 [2024-07-15 20:33:50.165819] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
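The Total row in the bdevperf table above omits runtime and, to within rounding, sums the IOPS, MiB/s, Fail/s and TO/s columns across the ten devices, with the min and max latency columns taken across devices. A throwaway spot check, not part of the test scripts, reproduces the aggregate IOPS and MiB/s from the printed per-device values:

  awk 'BEGIN {
    print 178.26+172.11+171.37+175.98+84.72+176.08+84.36+174.45+84.01+176.46   # IOPS column  -> 1477.8
    print 11.14+10.76+10.71+11.00+5.29+11.01+5.27+10.90+5.25+11.03             # MiB/s column -> 92.36
  }'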
00:28:11.785 [2024-07-15 20:33:50.165840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7950 (9): Bad file descriptor 00:28:11.785 [2024-07-15 20:33:50.165868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a17b10 (9): Bad file descriptor 00:28:11.785 [2024-07-15 20:33:50.166205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.785 [2024-07-15 20:33:50.166236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a393d0 with addr=10.0.0.2, port=4420 00:28:11.785 [2024-07-15 20:33:50.166253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a393d0 is same with the state(5) to be set 00:28:11.785 [2024-07-15 20:33:50.166416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.785 [2024-07-15 20:33:50.166443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1babc40 with addr=10.0.0.2, port=4420 00:28:11.785 [2024-07-15 20:33:50.166460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1babc40 is same with the state(5) to be set 00:28:11.785 [2024-07-15 20:33:50.166630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.785 [2024-07-15 20:33:50.166656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a40490 with addr=10.0.0.2, port=4420 00:28:11.786 [2024-07-15 20:33:50.166671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a40490 is same with the state(5) to be set 00:28:11.786 [2024-07-15 20:33:50.166825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.786 [2024-07-15 20:33:50.166852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150f610 with addr=10.0.0.2, port=4420 00:28:11.786 [2024-07-15 20:33:50.166868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150f610 is same with the state(5) to be set 00:28:11.786 [2024-07-15 20:33:50.167021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.786 [2024-07-15 20:33:50.167057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbf030 with addr=10.0.0.2, port=4420 00:28:11.786 [2024-07-15 20:33:50.167074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbf030 is same with the state(5) to be set 00:28:11.786 [2024-07-15 20:33:50.167093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.167107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.167122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:11.786 [2024-07-15 20:33:50.167143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.167158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.167171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:28:11.786 [2024-07-15 20:33:50.167187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.167201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.167214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:11.786 [2024-07-15 20:33:50.167249] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.786 [2024-07-15 20:33:50.167274] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.786 [2024-07-15 20:33:50.167292] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.786 [2024-07-15 20:33:50.167313] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.786 [2024-07-15 20:33:50.167333] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.786 [2024-07-15 20:33:50.168189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a393d0 (9): Bad file descriptor 00:28:11.786 [2024-07-15 20:33:50.168268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1babc40 (9): Bad file descriptor 00:28:11.786 [2024-07-15 20:33:50.168285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a40490 (9): Bad file descriptor 00:28:11.786 [2024-07-15 20:33:50.168302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150f610 (9): Bad file descriptor 00:28:11.786 [2024-07-15 20:33:50.168320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbf030 (9): Bad file descriptor 00:28:11.786 [2024-07-15 20:33:50.168334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.168347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.168360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:11.786 [2024-07-15 20:33:50.168378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.168392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.168409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:11.786 [2024-07-15 20:33:50.168480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:11.786 [2024-07-15 20:33:50.168501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.168527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.168540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:11.786 [2024-07-15 20:33:50.168556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.168571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.168583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:11.786 [2024-07-15 20:33:50.168599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.168613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.168626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:11.786 [2024-07-15 20:33:50.168642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.168655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.168668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:11.786 [2024-07-15 20:33:50.168684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:11.786 [2024-07-15 20:33:50.168697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:11.786 [2024-07-15 20:33:50.168710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:11.786 [2024-07-15 20:33:50.168774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.786 [2024-07-15 20:33:50.168829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
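The repeated "connect() failed, errno = 111" entries in the dump above are ECONNREFUSED: consistent with the target being shut down while I/O is in flight, which is what this test case exercises, the host-side reconnect and reset attempts are refused and each controller ends up in the failed state. On a typical Linux install the errno mapping can be confirmed straight from the kernel headers (a quick check, not part of the test; header paths vary slightly by distro):

  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # #define ECONNREFUSED    111     /* Connection refused */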
00:28:12.354 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:12.354 20:33:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:13.290 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4138416 00:28:13.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4138416) - No such process 00:28:13.290 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:13.290 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:13.290 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.291 rmmod nvme_tcp 00:28:13.291 rmmod nvme_fabrics 00:28:13.291 rmmod nvme_keyring 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.291 20:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.847 20:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.847 00:28:15.847 real 0m7.188s 00:28:15.847 user 0m16.586s 00:28:15.847 sys 0m1.444s 00:28:15.847 
20:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.847 20:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.847 ************************************ 00:28:15.847 END TEST nvmf_shutdown_tc3 00:28:15.847 ************************************ 00:28:15.847 20:33:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:15.847 20:33:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:15.847 00:28:15.847 real 0m27.162s 00:28:15.847 user 1m15.436s 00:28:15.847 sys 0m6.458s 00:28:15.847 20:33:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.847 20:33:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.847 ************************************ 00:28:15.847 END TEST nvmf_shutdown 00:28:15.847 ************************************ 00:28:15.847 20:33:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:15.847 20:33:53 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:15.847 20:33:53 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.847 20:33:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.847 20:33:53 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:15.847 20:33:53 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.847 20:33:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.847 20:33:53 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:15.848 20:33:53 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:15.848 20:33:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:15.848 20:33:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.848 20:33:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.848 ************************************ 00:28:15.848 START TEST nvmf_multicontroller 00:28:15.848 ************************************ 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:15.848 * Looking for test storage... 
00:28:15.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:15.848 20:33:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.848 20:33:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.751 20:33:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.751 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.751 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.751 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.751 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.751 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.752 20:33:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:28:17.752 00:28:17.752 --- 10.0.0.2 ping statistics --- 00:28:17.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.752 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:28:17.752 00:28:17.752 --- 10.0.0.1 ping statistics --- 00:28:17.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.752 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4140807 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4140807 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4140807 ']' 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.752 20:33:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.752 [2024-07-15 20:33:56.036329] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:17.752 [2024-07-15 20:33:56.036403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.752 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.752 [2024-07-15 20:33:56.098449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:17.752 [2024-07-15 20:33:56.181549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.752 [2024-07-15 20:33:56.181602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.752 [2024-07-15 20:33:56.181629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.752 [2024-07-15 20:33:56.181640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.752 [2024-07-15 20:33:56.181650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
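The nvmf_tcp_init sequence traced above wires the two ice ports into a loopback topology: cvl_0_0 is moved into its own network namespace and becomes the target side, cvl_0_1 stays in the root namespace as the initiator side, and the target application is then launched inside that namespace. A minimal standalone sketch of the same steps, using the interface names, addresses and core mask from this run (paths shortened; this is not the harness code itself):

  # move the target-side port into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open TCP/4420 on the initiator-side interface and check reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the NVMe-oF target inside the namespace (same flags nvmfappstart passes here)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!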
00:28:17.752 [2024-07-15 20:33:56.181745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.752 [2024-07-15 20:33:56.181806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.752 [2024-07-15 20:33:56.181808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 [2024-07-15 20:33:56.330371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 Malloc0 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 [2024-07-15 20:33:56.387802] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 
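rpc_cmd in this trace is the harness wrapper around SPDK's scripts/rpc.py, talking to the default /var/tmp/spdk.sock inside the target namespace. The provisioning performed here and in the lines that follow, rewritten as direct rpc.py calls (a sketch of the same steps under that assumption, not the harness code):

  RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"

  $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, same options the test passes
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second subsystem (cnode2 backed by Malloc1) is created the same way a few lines further down.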
20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 [2024-07-15 20:33:56.395660] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 Malloc1 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.011 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4140834 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4140834 /var/tmp/bdevperf.sock 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4140834 ']' 00:28:18.012 20:33:56 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.012 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.270 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:18.270 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:18.270 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:18.270 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.270 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.531 NVMe0n1 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.531 1 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.531 request: 00:28:18.531 { 00:28:18.531 "name": "NVMe0", 00:28:18.531 "trtype": "tcp", 00:28:18.531 "traddr": "10.0.0.2", 00:28:18.531 "adrfam": "ipv4", 00:28:18.531 "trsvcid": "4420", 00:28:18.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.531 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:18.531 "hostaddr": "10.0.0.2", 00:28:18.531 "hostsvcid": "60000", 00:28:18.531 "prchk_reftag": false, 00:28:18.531 "prchk_guard": false, 00:28:18.531 "hdgst": false, 00:28:18.531 "ddgst": false, 00:28:18.531 "method": "bdev_nvme_attach_controller", 00:28:18.531 "req_id": 1 00:28:18.531 } 00:28:18.531 Got JSON-RPC error response 00:28:18.531 response: 00:28:18.531 { 00:28:18.531 "code": -114, 00:28:18.531 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:18.531 } 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.531 request: 00:28:18.531 { 00:28:18.531 "name": "NVMe0", 00:28:18.531 "trtype": "tcp", 00:28:18.531 "traddr": "10.0.0.2", 00:28:18.531 "adrfam": "ipv4", 00:28:18.531 "trsvcid": "4420", 00:28:18.531 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.531 "hostaddr": "10.0.0.2", 00:28:18.531 "hostsvcid": "60000", 00:28:18.531 "prchk_reftag": false, 00:28:18.531 "prchk_guard": false, 
00:28:18.531 "hdgst": false, 00:28:18.531 "ddgst": false, 00:28:18.531 "method": "bdev_nvme_attach_controller", 00:28:18.531 "req_id": 1 00:28:18.531 } 00:28:18.531 Got JSON-RPC error response 00:28:18.531 response: 00:28:18.531 { 00:28:18.531 "code": -114, 00:28:18.531 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:18.531 } 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.531 request: 00:28:18.531 { 00:28:18.531 "name": "NVMe0", 00:28:18.531 "trtype": "tcp", 00:28:18.531 "traddr": "10.0.0.2", 00:28:18.531 "adrfam": "ipv4", 00:28:18.531 "trsvcid": "4420", 00:28:18.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.531 "hostaddr": "10.0.0.2", 00:28:18.531 "hostsvcid": "60000", 00:28:18.531 "prchk_reftag": false, 00:28:18.531 "prchk_guard": false, 00:28:18.531 "hdgst": false, 00:28:18.531 "ddgst": false, 00:28:18.531 "multipath": "disable", 00:28:18.531 "method": "bdev_nvme_attach_controller", 00:28:18.531 "req_id": 1 00:28:18.531 } 00:28:18.531 Got JSON-RPC error response 00:28:18.531 response: 00:28:18.531 { 00:28:18.531 "code": -114, 00:28:18.531 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:18.531 } 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.531 20:33:56 
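The rejected attach calls above, and the failover-mode attempt that follows, all reuse the controller name NVMe0 for the 10.0.0.2:4420 listener while changing something the existing controller cannot share (host NQN, target subsystem, or multipath policy), so bdev_nvme_attach_controller returns -114 each time. The attach that does succeed a few lines below keeps the subsystem and the name and simply points at the other listener, port 4421, adding a second path under NVMe0. A sketch of that call against the bdevperf RPC socket used in this run:

  # add 10.0.0.2:4421 as a second path for the existing NVMe0 controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1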
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.531 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.531 request: 00:28:18.531 { 00:28:18.531 "name": "NVMe0", 00:28:18.531 "trtype": "tcp", 00:28:18.531 "traddr": "10.0.0.2", 00:28:18.531 "adrfam": "ipv4", 00:28:18.531 "trsvcid": "4420", 00:28:18.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.531 "hostaddr": "10.0.0.2", 00:28:18.531 "hostsvcid": "60000", 00:28:18.531 "prchk_reftag": false, 00:28:18.531 "prchk_guard": false, 00:28:18.532 "hdgst": false, 00:28:18.532 "ddgst": false, 00:28:18.532 "multipath": "failover", 00:28:18.532 "method": "bdev_nvme_attach_controller", 00:28:18.532 "req_id": 1 00:28:18.532 } 00:28:18.532 Got JSON-RPC error response 00:28:18.532 response: 00:28:18.532 { 00:28:18.532 "code": -114, 00:28:18.532 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:18.532 } 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.532 20:33:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.791 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.791 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.050 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:19.050 20:33:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:19.985 0 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4140834 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4140834 ']' 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4140834 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:19.985 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4140834 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4140834' 00:28:20.244 killing process with pid 4140834 00:28:20.244 20:33:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4140834 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4140834 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:20.244 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:20.244 [2024-07-15 20:33:56.500665] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:20.244 [2024-07-15 20:33:56.500765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140834 ] 00:28:20.244 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.244 [2024-07-15 20:33:56.562274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.244 [2024-07-15 20:33:56.649779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.244 [2024-07-15 20:33:57.337942] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 723da868-e43b-40e7-97b3-d29d2864f9a1 already exists 00:28:20.244 [2024-07-15 20:33:57.337983] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:723da868-e43b-40e7-97b3-d29d2864f9a1 alias for bdev NVMe1n1 00:28:20.244 [2024-07-15 20:33:57.337999] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:20.244 Running I/O for 1 seconds... 
00:28:20.244 00:28:20.244 Latency(us) 00:28:20.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.244 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:20.244 NVMe0n1 : 1.00 18659.97 72.89 0.00 0.00 6849.22 4320.52 16019.91 00:28:20.244 =================================================================================================================== 00:28:20.244 Total : 18659.97 72.89 0.00 0.00 6849.22 4320.52 16019.91 00:28:20.244 Received shutdown signal, test time was about 1.000000 seconds 00:28:20.244 00:28:20.244 Latency(us) 00:28:20.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.244 =================================================================================================================== 00:28:20.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.244 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.244 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:20.244 rmmod nvme_tcp 00:28:20.505 rmmod nvme_fabrics 00:28:20.505 rmmod nvme_keyring 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4140807 ']' 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4140807 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4140807 ']' 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4140807 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4140807 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4140807' 00:28:20.505 killing process with pid 4140807 00:28:20.505 20:33:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4140807 00:28:20.505 20:33:58 
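Teardown (nvmftestfini) runs in the opposite order of the setup: unload the host-side NVMe/TCP kernel modules, stop the target, and undo the namespace plumbing. Roughly equivalent manual steps, assuming remove_spdk_ns does nothing more than delete the test namespace (a sketch, not the helper itself):

  modprobe -r nvme-tcp nvme-fabrics     # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  kill "$nvmfpid" && wait "$nvmfpid"    # stop the nvmf_tgt started at the beginning of the test
  ip netns delete cvl_0_0_ns_spdk       # physical port cvl_0_0 falls back to the root namespace
  ip -4 addr flush cvl_0_1              # drop 10.0.0.1/24 from the initiator-side interface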
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4140807 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.764 20:33:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.671 20:34:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.671 00:28:22.671 real 0m7.361s 00:28:22.671 user 0m11.647s 00:28:22.671 sys 0m2.298s 00:28:22.671 20:34:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:22.671 20:34:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.671 ************************************ 00:28:22.671 END TEST nvmf_multicontroller 00:28:22.671 ************************************ 00:28:22.929 20:34:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:22.929 20:34:01 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:22.929 20:34:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:22.929 20:34:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.929 20:34:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.929 ************************************ 00:28:22.929 START TEST nvmf_aer 00:28:22.929 ************************************ 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:22.929 * Looking for test storage... 
00:28:22.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:22.929 20:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:24.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.834 
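As in the multicontroller run, nvmf_tcp_init takes the first discovered port as the target-side interface and the second as the initiator; the (( 2 > 1 )) check above is that branch, and the initiator assignment follows immediately below. In outline (a reconstruction of the logic visible in this trace, not the exact common.sh source):

  TCP_INTERFACE_LIST=("${net_devs[@]}")                  # cvl_0_0 cvl_0_1 in this run
  if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
      NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}     # moved into cvl_0_0_ns_spdk, addressed 10.0.0.2
      NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}  # stays in the root namespace, addressed 10.0.0.1
  fi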
20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:24.834 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:24.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:28:24.835 00:28:24.835 --- 10.0.0.2 ping statistics --- 00:28:24.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.835 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:28:24.835 00:28:24.835 --- 10.0.0.1 ping statistics --- 00:28:24.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.835 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:24.835 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4143045 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4143045 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 4143045 ']' 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.111 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.111 [2024-07-15 20:34:03.424069] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:25.111 [2024-07-15 20:34:03.424149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.111 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.111 [2024-07-15 20:34:03.490555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.111 [2024-07-15 20:34:03.575759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.111 [2024-07-15 20:34:03.575810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:25.111 [2024-07-15 20:34:03.575838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.111 [2024-07-15 20:34:03.575849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.111 [2024-07-15 20:34:03.575858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.111 [2024-07-15 20:34:03.575943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.111 [2024-07-15 20:34:03.576016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.111 [2024-07-15 20:34:03.576084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.111 [2024-07-15 20:34:03.576087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.371 [2024-07-15 20:34:03.729842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.371 Malloc0 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.371 [2024-07-15 20:34:03.783045] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.371 [ 00:28:25.371 { 00:28:25.371 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:25.371 "subtype": "Discovery", 00:28:25.371 "listen_addresses": [], 00:28:25.371 "allow_any_host": true, 00:28:25.371 "hosts": [] 00:28:25.371 }, 00:28:25.371 { 00:28:25.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.371 "subtype": "NVMe", 00:28:25.371 "listen_addresses": [ 00:28:25.371 { 00:28:25.371 "trtype": "TCP", 00:28:25.371 "adrfam": "IPv4", 00:28:25.371 "traddr": "10.0.0.2", 00:28:25.371 "trsvcid": "4420" 00:28:25.371 } 00:28:25.371 ], 00:28:25.371 "allow_any_host": true, 00:28:25.371 "hosts": [], 00:28:25.371 "serial_number": "SPDK00000000000001", 00:28:25.371 "model_number": "SPDK bdev Controller", 00:28:25.371 "max_namespaces": 2, 00:28:25.371 "min_cntlid": 1, 00:28:25.371 "max_cntlid": 65519, 00:28:25.371 "namespaces": [ 00:28:25.371 { 00:28:25.371 "nsid": 1, 00:28:25.371 "bdev_name": "Malloc0", 00:28:25.371 "name": "Malloc0", 00:28:25.371 "nguid": "88427BD6020548F08CDDCD10AC1A3855", 00:28:25.371 "uuid": "88427bd6-0205-48f0-8cdd-cd10ac1a3855" 00:28:25.371 } 00:28:25.371 ] 00:28:25.371 } 00:28:25.371 ] 00:28:25.371 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4143181 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:25.372 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:25.372 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.631 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:25.631 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:25.631 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:25.631 20:34:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.631 Malloc1 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.631 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.631 [ 00:28:25.631 { 00:28:25.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:25.631 "subtype": "Discovery", 00:28:25.631 "listen_addresses": [], 00:28:25.631 "allow_any_host": true, 00:28:25.631 "hosts": [] 00:28:25.631 }, 00:28:25.631 { 00:28:25.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.631 "subtype": "NVMe", 00:28:25.631 "listen_addresses": [ 00:28:25.631 { 00:28:25.631 "trtype": "TCP", 00:28:25.631 "adrfam": "IPv4", 00:28:25.631 "traddr": "10.0.0.2", 00:28:25.632 "trsvcid": "4420" 00:28:25.632 } 00:28:25.632 ], 00:28:25.632 "allow_any_host": true, 00:28:25.632 "hosts": [], 00:28:25.632 "serial_number": "SPDK00000000000001", 00:28:25.632 "model_number": "SPDK bdev Controller", 00:28:25.632 "max_namespaces": 2, 00:28:25.632 "min_cntlid": 1, 00:28:25.632 "max_cntlid": 65519, 00:28:25.632 "namespaces": [ 00:28:25.632 { 00:28:25.632 "nsid": 1, 00:28:25.632 "bdev_name": "Malloc0", 00:28:25.632 "name": "Malloc0", 00:28:25.632 "nguid": "88427BD6020548F08CDDCD10AC1A3855", 00:28:25.632 "uuid": "88427bd6-0205-48f0-8cdd-cd10ac1a3855" 00:28:25.632 }, 00:28:25.632 { 00:28:25.632 "nsid": 2, 00:28:25.632 "bdev_name": "Malloc1", 00:28:25.632 "name": "Malloc1", 00:28:25.632 "nguid": "F6589018C3AA49259EF6688B09E26C6E", 00:28:25.632 "uuid": "f6589018-c3aa-4925-9ef6-688b09e26c6e" 00:28:25.632 } 00:28:25.632 ] 00:28:25.632 } 00:28:25.632 ] 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4143181 00:28:25.632 Asynchronous Event Request test 00:28:25.632 Attaching to 10.0.0.2 00:28:25.632 Attached to 10.0.0.2 00:28:25.632 Registering asynchronous event callbacks... 00:28:25.632 Starting namespace attribute notice tests for all controllers... 00:28:25.632 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:25.632 aer_cb - Changed Namespace 00:28:25.632 Cleaning up... 
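For readers following the trace, the AER exercise above can be condensed into the RPC sequence below. This is an illustrative sketch only: the test drives these calls through the rpc_cmd wrapper against the target started inside cvl_0_0_ns_spdk, so the direct rpc.py invocation, socket path, and relative binary path shown here are assumptions; addresses, NQNs, and sizes are copied from the trace.

  # assumption: rpc.py talks to the same /var/tmp/spdk.sock the rpc_cmd wrapper uses
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # arm the AER listener; it touches /tmp/aer_touch_file once its callbacks are registered
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # adding a second namespace is what produces the "Changed Namespace" notice seen above
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2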
00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.632 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:25.890 rmmod nvme_tcp 00:28:25.890 rmmod nvme_fabrics 00:28:25.890 rmmod nvme_keyring 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4143045 ']' 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4143045 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 4143045 ']' 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 4143045 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4143045 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4143045' 00:28:25.890 killing process with pid 4143045 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 4143045 00:28:25.890 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 4143045 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.151 20:34:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.058 20:34:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:28.058 00:28:28.058 real 0m5.285s 00:28:28.058 user 0m4.197s 00:28:28.058 sys 0m1.847s 00:28:28.058 20:34:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:28.058 20:34:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.058 ************************************ 00:28:28.058 END TEST nvmf_aer 00:28:28.058 ************************************ 00:28:28.058 20:34:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:28.058 20:34:06 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:28.058 20:34:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:28.058 20:34:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:28.058 20:34:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:28.058 ************************************ 00:28:28.058 START TEST nvmf_async_init 00:28:28.058 ************************************ 00:28:28.058 20:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:28.316 * Looking for test storage... 
00:28:28.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:28.316 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.316 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c7f43bdaf05e4002972630f1e492922e 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:28.317 20:34:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:28.317 20:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:30.227 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:30.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:30.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:30.228 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
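(Illustrative aside: the discovery loop above resolves NIC names through sysfs, so the lookup it just performed for the first e810 port can be repeated by hand; the PCI bus address is taken from the trace.)

  ls /sys/bus/pci/devices/0000:0a:00.0/net/
  # -> cvl_0_0, the interface the harness goes on to use as NVMF_TARGET_INTERFACE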
00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:30.228 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:30.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:28:30.228 00:28:30.228 --- 10.0.0.2 ping statistics --- 00:28:30.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.228 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:30.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:28:30.228 00:28:30.228 --- 10.0.0.1 ping statistics --- 00:28:30.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.228 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4145113 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4145113 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 4145113 ']' 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:30.228 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.228 [2024-07-15 20:34:08.623782] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:28:30.228 [2024-07-15 20:34:08.623885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.228 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.228 [2024-07-15 20:34:08.696786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.487 [2024-07-15 20:34:08.791046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.487 [2024-07-15 20:34:08.791112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.487 [2024-07-15 20:34:08.791128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.487 [2024-07-15 20:34:08.791141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.487 [2024-07-15 20:34:08.791155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.487 [2024-07-15 20:34:08.791195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.487 [2024-07-15 20:34:08.942328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.487 null0 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.487 20:34:08 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c7f43bdaf05e4002972630f1e492922e 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.487 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.488 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:30.488 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.488 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.488 [2024-07-15 20:34:08.982588] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.488 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.488 20:34:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:30.488 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.488 20:34:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.747 nvme0n1 00:28:30.747 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.747 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:30.747 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.747 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.747 [ 00:28:30.747 { 00:28:30.747 "name": "nvme0n1", 00:28:30.747 "aliases": [ 00:28:30.747 "c7f43bda-f05e-4002-9726-30f1e492922e" 00:28:30.747 ], 00:28:30.747 "product_name": "NVMe disk", 00:28:30.748 "block_size": 512, 00:28:30.748 "num_blocks": 2097152, 00:28:30.748 "uuid": "c7f43bda-f05e-4002-9726-30f1e492922e", 00:28:30.748 "assigned_rate_limits": { 00:28:30.748 "rw_ios_per_sec": 0, 00:28:30.748 "rw_mbytes_per_sec": 0, 00:28:30.748 "r_mbytes_per_sec": 0, 00:28:30.748 "w_mbytes_per_sec": 0 00:28:30.748 }, 00:28:30.748 "claimed": false, 00:28:30.748 "zoned": false, 00:28:30.748 "supported_io_types": { 00:28:30.748 "read": true, 00:28:30.748 "write": true, 00:28:30.748 "unmap": false, 00:28:30.748 "flush": true, 00:28:30.748 "reset": true, 00:28:30.748 "nvme_admin": true, 00:28:30.748 "nvme_io": true, 00:28:30.748 "nvme_io_md": false, 00:28:30.748 "write_zeroes": true, 00:28:30.748 "zcopy": false, 00:28:30.748 "get_zone_info": false, 00:28:30.748 "zone_management": false, 00:28:30.748 "zone_append": false, 00:28:30.748 "compare": true, 00:28:30.748 "compare_and_write": true, 00:28:30.748 "abort": true, 00:28:30.748 "seek_hole": false, 00:28:30.748 "seek_data": false, 00:28:30.748 "copy": true, 00:28:30.748 "nvme_iov_md": false 00:28:30.748 }, 00:28:30.748 "memory_domains": [ 00:28:30.748 { 00:28:30.748 "dma_device_id": "system", 00:28:30.748 "dma_device_type": 1 00:28:30.748 } 00:28:30.748 ], 00:28:30.748 "driver_specific": { 00:28:30.748 "nvme": [ 00:28:30.748 { 00:28:30.748 "trid": { 00:28:30.748 "trtype": "TCP", 00:28:30.748 "adrfam": "IPv4", 00:28:30.748 "traddr": "10.0.0.2", 
00:28:30.748 "trsvcid": "4420", 00:28:30.748 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:30.748 }, 00:28:30.748 "ctrlr_data": { 00:28:30.748 "cntlid": 1, 00:28:30.748 "vendor_id": "0x8086", 00:28:30.748 "model_number": "SPDK bdev Controller", 00:28:30.748 "serial_number": "00000000000000000000", 00:28:30.748 "firmware_revision": "24.09", 00:28:30.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:30.748 "oacs": { 00:28:30.748 "security": 0, 00:28:30.748 "format": 0, 00:28:30.748 "firmware": 0, 00:28:30.748 "ns_manage": 0 00:28:30.748 }, 00:28:30.748 "multi_ctrlr": true, 00:28:30.748 "ana_reporting": false 00:28:30.748 }, 00:28:30.748 "vs": { 00:28:30.748 "nvme_version": "1.3" 00:28:30.748 }, 00:28:30.748 "ns_data": { 00:28:30.748 "id": 1, 00:28:30.748 "can_share": true 00:28:30.748 } 00:28:30.748 } 00:28:30.748 ], 00:28:30.748 "mp_policy": "active_passive" 00:28:30.748 } 00:28:30.748 } 00:28:30.748 ] 00:28:30.748 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.748 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:30.748 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.748 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.748 [2024-07-15 20:34:09.235703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.748 [2024-07-15 20:34:09.235792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dc40 (9): Bad file descriptor 00:28:31.007 [2024-07-15 20:34:09.378041] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 [ 00:28:31.007 { 00:28:31.007 "name": "nvme0n1", 00:28:31.007 "aliases": [ 00:28:31.007 "c7f43bda-f05e-4002-9726-30f1e492922e" 00:28:31.007 ], 00:28:31.007 "product_name": "NVMe disk", 00:28:31.007 "block_size": 512, 00:28:31.007 "num_blocks": 2097152, 00:28:31.007 "uuid": "c7f43bda-f05e-4002-9726-30f1e492922e", 00:28:31.007 "assigned_rate_limits": { 00:28:31.007 "rw_ios_per_sec": 0, 00:28:31.007 "rw_mbytes_per_sec": 0, 00:28:31.007 "r_mbytes_per_sec": 0, 00:28:31.007 "w_mbytes_per_sec": 0 00:28:31.007 }, 00:28:31.007 "claimed": false, 00:28:31.007 "zoned": false, 00:28:31.007 "supported_io_types": { 00:28:31.007 "read": true, 00:28:31.007 "write": true, 00:28:31.007 "unmap": false, 00:28:31.007 "flush": true, 00:28:31.007 "reset": true, 00:28:31.007 "nvme_admin": true, 00:28:31.007 "nvme_io": true, 00:28:31.007 "nvme_io_md": false, 00:28:31.007 "write_zeroes": true, 00:28:31.007 "zcopy": false, 00:28:31.007 "get_zone_info": false, 00:28:31.007 "zone_management": false, 00:28:31.007 "zone_append": false, 00:28:31.007 "compare": true, 00:28:31.007 "compare_and_write": true, 00:28:31.007 "abort": true, 00:28:31.007 "seek_hole": false, 00:28:31.007 "seek_data": false, 00:28:31.007 "copy": true, 00:28:31.007 "nvme_iov_md": false 00:28:31.007 }, 00:28:31.007 "memory_domains": [ 00:28:31.007 { 00:28:31.007 "dma_device_id": "system", 00:28:31.007 "dma_device_type": 
1 00:28:31.007 } 00:28:31.007 ], 00:28:31.007 "driver_specific": { 00:28:31.007 "nvme": [ 00:28:31.007 { 00:28:31.007 "trid": { 00:28:31.007 "trtype": "TCP", 00:28:31.007 "adrfam": "IPv4", 00:28:31.007 "traddr": "10.0.0.2", 00:28:31.007 "trsvcid": "4420", 00:28:31.007 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:31.007 }, 00:28:31.007 "ctrlr_data": { 00:28:31.007 "cntlid": 2, 00:28:31.007 "vendor_id": "0x8086", 00:28:31.007 "model_number": "SPDK bdev Controller", 00:28:31.007 "serial_number": "00000000000000000000", 00:28:31.007 "firmware_revision": "24.09", 00:28:31.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.007 "oacs": { 00:28:31.007 "security": 0, 00:28:31.007 "format": 0, 00:28:31.007 "firmware": 0, 00:28:31.007 "ns_manage": 0 00:28:31.007 }, 00:28:31.007 "multi_ctrlr": true, 00:28:31.007 "ana_reporting": false 00:28:31.007 }, 00:28:31.007 "vs": { 00:28:31.007 "nvme_version": "1.3" 00:28:31.007 }, 00:28:31.007 "ns_data": { 00:28:31.007 "id": 1, 00:28:31.007 "can_share": true 00:28:31.007 } 00:28:31.007 } 00:28:31.007 ], 00:28:31.007 "mp_policy": "active_passive" 00:28:31.007 } 00:28:31.007 } 00:28:31.007 ] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XOxbKa9Wm5 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XOxbKa9Wm5 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 [2024-07-15 20:34:09.424462] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:31.007 [2024-07-15 20:34:09.424636] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XOxbKa9Wm5 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 [2024-07-15 20:34:09.432469] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XOxbKa9Wm5 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 [2024-07-15 20:34:09.440502] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:31.007 [2024-07-15 20:34:09.440567] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:31.007 nvme0n1 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.007 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 [ 00:28:31.007 { 00:28:31.007 "name": "nvme0n1", 00:28:31.007 "aliases": [ 00:28:31.007 "c7f43bda-f05e-4002-9726-30f1e492922e" 00:28:31.007 ], 00:28:31.007 "product_name": "NVMe disk", 00:28:31.007 "block_size": 512, 00:28:31.007 "num_blocks": 2097152, 00:28:31.007 "uuid": "c7f43bda-f05e-4002-9726-30f1e492922e", 00:28:31.007 "assigned_rate_limits": { 00:28:31.007 "rw_ios_per_sec": 0, 00:28:31.007 "rw_mbytes_per_sec": 0, 00:28:31.007 "r_mbytes_per_sec": 0, 00:28:31.007 "w_mbytes_per_sec": 0 00:28:31.007 }, 00:28:31.007 "claimed": false, 00:28:31.007 "zoned": false, 00:28:31.007 "supported_io_types": { 00:28:31.007 "read": true, 00:28:31.007 "write": true, 00:28:31.008 "unmap": false, 00:28:31.008 "flush": true, 00:28:31.008 "reset": true, 00:28:31.008 "nvme_admin": true, 00:28:31.008 "nvme_io": true, 00:28:31.008 "nvme_io_md": false, 00:28:31.008 "write_zeroes": true, 00:28:31.008 "zcopy": false, 00:28:31.008 "get_zone_info": false, 00:28:31.008 "zone_management": false, 00:28:31.008 "zone_append": false, 00:28:31.008 "compare": true, 00:28:31.008 "compare_and_write": true, 00:28:31.008 "abort": true, 00:28:31.008 "seek_hole": false, 00:28:31.008 "seek_data": false, 00:28:31.008 "copy": true, 00:28:31.008 "nvme_iov_md": false 00:28:31.008 }, 00:28:31.008 "memory_domains": [ 00:28:31.008 { 00:28:31.008 "dma_device_id": "system", 00:28:31.008 "dma_device_type": 1 00:28:31.008 } 00:28:31.008 ], 00:28:31.008 "driver_specific": { 00:28:31.008 "nvme": [ 00:28:31.008 { 00:28:31.008 "trid": { 00:28:31.008 "trtype": "TCP", 00:28:31.008 "adrfam": "IPv4", 00:28:31.008 "traddr": "10.0.0.2", 00:28:31.008 "trsvcid": "4421", 00:28:31.008 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:31.008 }, 00:28:31.008 "ctrlr_data": { 00:28:31.008 "cntlid": 3, 00:28:31.008 "vendor_id": "0x8086", 00:28:31.008 "model_number": "SPDK bdev Controller", 00:28:31.008 "serial_number": "00000000000000000000", 00:28:31.008 "firmware_revision": "24.09", 00:28:31.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:28:31.008 "oacs": { 00:28:31.008 "security": 0, 00:28:31.008 "format": 0, 00:28:31.008 "firmware": 0, 00:28:31.008 "ns_manage": 0 00:28:31.008 }, 00:28:31.008 "multi_ctrlr": true, 00:28:31.008 "ana_reporting": false 00:28:31.008 }, 00:28:31.008 "vs": { 00:28:31.008 "nvme_version": "1.3" 00:28:31.008 }, 00:28:31.008 "ns_data": { 00:28:31.008 "id": 1, 00:28:31.008 "can_share": true 00:28:31.008 } 00:28:31.008 } 00:28:31.008 ], 00:28:31.008 "mp_policy": "active_passive" 00:28:31.008 } 00:28:31.008 } 00:28:31.008 ] 00:28:31.008 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.008 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.008 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.008 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.XOxbKa9Wm5 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.266 rmmod nvme_tcp 00:28:31.266 rmmod nvme_fabrics 00:28:31.266 rmmod nvme_keyring 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4145113 ']' 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4145113 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 4145113 ']' 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 4145113 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4145113 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4145113' 00:28:31.266 killing process with pid 4145113 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 4145113 00:28:31.266 [2024-07-15 20:34:09.619036] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:28:31.266 [2024-07-15 20:34:09.619073] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:31.266 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 4145113 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.524 20:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.469 20:34:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.469 00:28:33.469 real 0m5.274s 00:28:33.469 user 0m1.995s 00:28:33.469 sys 0m1.692s 00:28:33.469 20:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.469 20:34:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.469 ************************************ 00:28:33.469 END TEST nvmf_async_init 00:28:33.469 ************************************ 00:28:33.469 20:34:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:33.469 20:34:11 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:33.469 20:34:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:33.469 20:34:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.469 20:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.469 ************************************ 00:28:33.469 START TEST dma 00:28:33.469 ************************************ 00:28:33.469 20:34:11 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:33.469 * Looking for test storage... 
00:28:33.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.469 20:34:11 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.469 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.469 20:34:11 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.469 20:34:11 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.469 20:34:11 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.469 20:34:11 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.470 20:34:11 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.470 20:34:11 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.470 20:34:11 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:33.470 20:34:11 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.470 20:34:11 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.470 20:34:11 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:33.470 20:34:11 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:33.470 00:28:33.470 real 0m0.066s 00:28:33.470 user 0m0.043s 00:28:33.470 sys 0m0.027s 00:28:33.470 20:34:11 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.470 20:34:11 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:33.470 ************************************ 00:28:33.470 END TEST dma 00:28:33.470 ************************************ 00:28:33.470 20:34:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:33.470 20:34:11 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:33.470 20:34:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:33.470 20:34:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.470 20:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.728 ************************************ 00:28:33.728 START TEST nvmf_identify 00:28:33.728 ************************************ 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:33.728 * Looking for test storage... 
00:28:33.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.728 20:34:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.729 20:34:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:35.633 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:35.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:35.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:35.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:35.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:35.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:28:35.634 00:28:35.634 --- 10.0.0.2 ping statistics --- 00:28:35.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.634 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:28:35.634 00:28:35.634 --- 10.0.0.1 ping statistics --- 00:28:35.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.634 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:35.634 20:34:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4147241 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4147241 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 4147241 ']' 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:35.634 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.634 [2024-07-15 20:34:14.047180] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:35.634 [2024-07-15 20:34:14.047270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.634 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.634 [2024-07-15 20:34:14.111743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.902 [2024-07-15 20:34:14.198241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
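Condensed for reference, the nvmf_tcp_init sequence traced above boils down to the shell steps below (a sketch reconstructed from the xtrace; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this CI host, and repository paths are shortened):

    # Put the target-side port into its own network namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # The target application itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF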
00:28:35.902 [2024-07-15 20:34:14.198295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.902 [2024-07-15 20:34:14.198317] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.902 [2024-07-15 20:34:14.198328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.902 [2024-07-15 20:34:14.198337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.902 [2024-07-15 20:34:14.198499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.902 [2024-07-15 20:34:14.198567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.902 [2024-07-15 20:34:14.198633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.902 [2024-07-15 20:34:14.198635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 [2024-07-15 20:34:14.325732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 Malloc0 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 [2024-07-15 20:34:14.407525] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.902 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.902 [ 00:28:35.902 { 00:28:35.902 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:35.903 "subtype": "Discovery", 00:28:35.903 "listen_addresses": [ 00:28:35.903 { 00:28:35.903 "trtype": "TCP", 00:28:35.903 "adrfam": "IPv4", 00:28:35.903 "traddr": "10.0.0.2", 00:28:35.903 "trsvcid": "4420" 00:28:35.903 } 00:28:35.903 ], 00:28:35.903 "allow_any_host": true, 00:28:35.903 "hosts": [] 00:28:35.903 }, 00:28:35.903 { 00:28:35.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.903 "subtype": "NVMe", 00:28:35.903 "listen_addresses": [ 00:28:35.903 { 00:28:35.903 "trtype": "TCP", 00:28:35.903 "adrfam": "IPv4", 00:28:35.903 "traddr": "10.0.0.2", 00:28:35.903 "trsvcid": "4420" 00:28:35.903 } 00:28:35.903 ], 00:28:35.903 "allow_any_host": true, 00:28:35.903 "hosts": [], 00:28:35.903 "serial_number": "SPDK00000000000001", 00:28:35.903 "model_number": "SPDK bdev Controller", 00:28:35.903 "max_namespaces": 32, 00:28:35.903 "min_cntlid": 1, 00:28:35.903 "max_cntlid": 65519, 00:28:35.903 "namespaces": [ 00:28:35.903 { 00:28:35.903 "nsid": 1, 00:28:35.903 "bdev_name": "Malloc0", 00:28:35.903 "name": "Malloc0", 00:28:35.903 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:35.903 "eui64": "ABCDEF0123456789", 00:28:35.903 "uuid": "f517b70f-e6fb-47ea-99f7-0ec061ee7a20" 00:28:35.903 } 00:28:35.903 ] 00:28:35.903 } 00:28:35.903 ] 00:28:35.903 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.903 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:36.163 [2024-07-15 20:34:14.449793] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
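The listener and namespace state dumped by nvmf_get_subsystems above was produced by a short JSON-RPC sequence; in this harness rpc_cmd forwards to scripts/rpk... rather, scripts/rpc.py against the target's /var/tmp/spdk.sock. A condensed sketch of the target setup and the host-side identify invocation, with the arguments copied from the trace and repository paths shortened:

    # Target side: transport, backing bdev, subsystem, namespace, and listeners.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Host side: query the discovery subsystem with all debug log flags enabled;
    # this is what generates the nvme_tcp/nvme_ctrlr *DEBUG* trace that follows.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all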
00:28:36.163 [2024-07-15 20:34:14.449838] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147264 ] 00:28:36.163 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.163 [2024-07-15 20:34:14.485238] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:36.163 [2024-07-15 20:34:14.485303] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:36.163 [2024-07-15 20:34:14.485319] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:36.163 [2024-07-15 20:34:14.485335] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:36.163 [2024-07-15 20:34:14.485345] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:36.163 [2024-07-15 20:34:14.485640] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:36.163 [2024-07-15 20:34:14.485698] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7c8ae0 0 00:28:36.163 [2024-07-15 20:34:14.491896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:36.163 [2024-07-15 20:34:14.491915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:36.163 [2024-07-15 20:34:14.491923] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:36.163 [2024-07-15 20:34:14.491929] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:36.163 [2024-07-15 20:34:14.491983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.163 [2024-07-15 20:34:14.491996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.163 [2024-07-15 20:34:14.492003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.163 [2024-07-15 20:34:14.492019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:36.163 [2024-07-15 20:34:14.492053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.163 [2024-07-15 20:34:14.502890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.163 [2024-07-15 20:34:14.502908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.163 [2024-07-15 20:34:14.502915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.163 [2024-07-15 20:34:14.502938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.163 [2024-07-15 20:34:14.502954] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:36.164 [2024-07-15 20:34:14.502965] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:36.164 [2024-07-15 20:34:14.502974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:36.164 [2024-07-15 20:34:14.502996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503005] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.164 [2024-07-15 20:34:14.503022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.164 [2024-07-15 20:34:14.503046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.164 [2024-07-15 20:34:14.503232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.164 [2024-07-15 20:34:14.503244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.164 [2024-07-15 20:34:14.503251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.164 [2024-07-15 20:34:14.503267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:36.164 [2024-07-15 20:34:14.503280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:36.164 [2024-07-15 20:34:14.503292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.164 [2024-07-15 20:34:14.503320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.164 [2024-07-15 20:34:14.503342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.164 [2024-07-15 20:34:14.503503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.164 [2024-07-15 20:34:14.503515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.164 [2024-07-15 20:34:14.503522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.164 [2024-07-15 20:34:14.503537] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:36.164 [2024-07-15 20:34:14.503551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:36.164 [2024-07-15 20:34:14.503563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.164 [2024-07-15 20:34:14.503587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.164 [2024-07-15 20:34:14.503607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.164 [2024-07-15 20:34:14.503754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.164 
[2024-07-15 20:34:14.503770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.164 [2024-07-15 20:34:14.503777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.164 [2024-07-15 20:34:14.503792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:36.164 [2024-07-15 20:34:14.503809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.503825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.164 [2024-07-15 20:34:14.503835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.164 [2024-07-15 20:34:14.503856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.164 [2024-07-15 20:34:14.504005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.164 [2024-07-15 20:34:14.504020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.164 [2024-07-15 20:34:14.504027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.164 [2024-07-15 20:34:14.504034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.164 [2024-07-15 20:34:14.504042] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:36.164 [2024-07-15 20:34:14.504050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:36.165 [2024-07-15 20:34:14.504063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:36.165 [2024-07-15 20:34:14.504173] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:36.165 [2024-07-15 20:34:14.504182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:36.165 [2024-07-15 20:34:14.504195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.504225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.504232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.165 [2024-07-15 20:34:14.504243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.165 [2024-07-15 20:34:14.504264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.165 [2024-07-15 20:34:14.504439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.165 [2024-07-15 20:34:14.504452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.165 [2024-07-15 20:34:14.504459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:36.165 [2024-07-15 20:34:14.504466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.165 [2024-07-15 20:34:14.504474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:36.165 [2024-07-15 20:34:14.504490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.504499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.504506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.165 [2024-07-15 20:34:14.504516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.165 [2024-07-15 20:34:14.504537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.165 [2024-07-15 20:34:14.504679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.165 [2024-07-15 20:34:14.504691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.165 [2024-07-15 20:34:14.504697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.504704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.165 [2024-07-15 20:34:14.504712] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:36.165 [2024-07-15 20:34:14.504720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:36.165 [2024-07-15 20:34:14.504733] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:36.165 [2024-07-15 20:34:14.504748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:36.165 [2024-07-15 20:34:14.504763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.504771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.165 [2024-07-15 20:34:14.504781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.165 [2024-07-15 20:34:14.504802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.165 [2024-07-15 20:34:14.505017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.165 [2024-07-15 20:34:14.505031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.165 [2024-07-15 20:34:14.505038] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.505045] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c8ae0): datao=0, datal=4096, cccid=0 00:28:36.165 [2024-07-15 20:34:14.505053] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81f240) on tqpair(0x7c8ae0): expected_datao=0, payload_size=4096 00:28:36.165 [2024-07-15 20:34:14.505061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:28:36.165 [2024-07-15 20:34:14.505072] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.505084] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.505121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.165 [2024-07-15 20:34:14.505133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.165 [2024-07-15 20:34:14.505139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.505146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.165 [2024-07-15 20:34:14.505158] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:36.165 [2024-07-15 20:34:14.505171] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:36.165 [2024-07-15 20:34:14.505179] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:36.165 [2024-07-15 20:34:14.505188] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:36.165 [2024-07-15 20:34:14.505196] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:36.165 [2024-07-15 20:34:14.505204] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:36.165 [2024-07-15 20:34:14.505219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:36.165 [2024-07-15 20:34:14.505231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.505238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.165 [2024-07-15 20:34:14.505245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.165 [2024-07-15 20:34:14.505255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:36.166 [2024-07-15 20:34:14.505277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.166 [2024-07-15 20:34:14.505483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.166 [2024-07-15 20:34:14.505496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.166 [2024-07-15 20:34:14.505503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.166 [2024-07-15 20:34:14.505521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.505545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.166 [2024-07-15 20:34:14.505555] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.505576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.166 [2024-07-15 20:34:14.505601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.505622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.166 [2024-07-15 20:34:14.505631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505648] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.505656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.166 [2024-07-15 20:34:14.505664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:36.166 [2024-07-15 20:34:14.505683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:36.166 [2024-07-15 20:34:14.505695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.505716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.505726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.166 [2024-07-15 20:34:14.505747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f240, cid 0, qid 0 00:28:36.166 [2024-07-15 20:34:14.505758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f3c0, cid 1, qid 0 00:28:36.166 [2024-07-15 20:34:14.505778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f540, cid 2, qid 0 00:28:36.166 [2024-07-15 20:34:14.505787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.166 [2024-07-15 20:34:14.505794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f840, cid 4, qid 0 00:28:36.166 [2024-07-15 20:34:14.506023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.166 [2024-07-15 20:34:14.506037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.166 [2024-07-15 20:34:14.506043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.506050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f840) on tqpair=0x7c8ae0 00:28:36.166 [2024-07-15 20:34:14.506059] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:36.166 [2024-07-15 20:34:14.506068] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:36.166 [2024-07-15 20:34:14.506085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.506094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.506105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.166 [2024-07-15 20:34:14.506125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f840, cid 4, qid 0 00:28:36.166 [2024-07-15 20:34:14.506371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.166 [2024-07-15 20:34:14.506386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.166 [2024-07-15 20:34:14.506393] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.506400] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c8ae0): datao=0, datal=4096, cccid=4 00:28:36.166 [2024-07-15 20:34:14.506407] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81f840) on tqpair(0x7c8ae0): expected_datao=0, payload_size=4096 00:28:36.166 [2024-07-15 20:34:14.506415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.506425] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.506433] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.550889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.166 [2024-07-15 20:34:14.550907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.166 [2024-07-15 20:34:14.550919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.550926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f840) on tqpair=0x7c8ae0 00:28:36.166 [2024-07-15 20:34:14.550945] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:36.166 [2024-07-15 20:34:14.550981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.550991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.551002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.166 [2024-07-15 20:34:14.551013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.551020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.551026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7c8ae0) 00:28:36.166 [2024-07-15 20:34:14.551035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.166 [2024-07-15 20:34:14.551062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x81f840, cid 4, qid 0 00:28:36.166 [2024-07-15 20:34:14.551090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f9c0, cid 5, qid 0 00:28:36.166 [2024-07-15 20:34:14.551325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.166 [2024-07-15 20:34:14.551341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.166 [2024-07-15 20:34:14.551348] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.166 [2024-07-15 20:34:14.551354] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c8ae0): datao=0, datal=1024, cccid=4 00:28:36.166 [2024-07-15 20:34:14.551362] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81f840) on tqpair(0x7c8ae0): expected_datao=0, payload_size=1024 00:28:36.166 [2024-07-15 20:34:14.551369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.551394] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.551402] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.551410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.167 [2024-07-15 20:34:14.551419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.167 [2024-07-15 20:34:14.551425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.551432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f9c0) on tqpair=0x7c8ae0 00:28:36.167 [2024-07-15 20:34:14.592033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.167 [2024-07-15 20:34:14.592052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.167 [2024-07-15 20:34:14.592060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f840) on tqpair=0x7c8ae0 00:28:36.167 [2024-07-15 20:34:14.592083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c8ae0) 00:28:36.167 [2024-07-15 20:34:14.592103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.167 [2024-07-15 20:34:14.592133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f840, cid 4, qid 0 00:28:36.167 [2024-07-15 20:34:14.592294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.167 [2024-07-15 20:34:14.592306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.167 [2024-07-15 20:34:14.592313] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592320] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c8ae0): datao=0, datal=3072, cccid=4 00:28:36.167 [2024-07-15 20:34:14.592332] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81f840) on tqpair(0x7c8ae0): expected_datao=0, payload_size=3072 00:28:36.167 [2024-07-15 20:34:14.592340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592350] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592358] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.167 [2024-07-15 20:34:14.592407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.167 [2024-07-15 20:34:14.592414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f840) on tqpair=0x7c8ae0 00:28:36.167 [2024-07-15 20:34:14.592435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c8ae0) 00:28:36.167 [2024-07-15 20:34:14.592453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.167 [2024-07-15 20:34:14.592480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f840, cid 4, qid 0 00:28:36.167 [2024-07-15 20:34:14.592637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.167 [2024-07-15 20:34:14.592649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.167 [2024-07-15 20:34:14.592656] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592662] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c8ae0): datao=0, datal=8, cccid=4 00:28:36.167 [2024-07-15 20:34:14.592670] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81f840) on tqpair(0x7c8ae0): expected_datao=0, payload_size=8 00:28:36.167 [2024-07-15 20:34:14.592677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592687] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.592694] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.633066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.167 [2024-07-15 20:34:14.633084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.167 [2024-07-15 20:34:14.633091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.167 [2024-07-15 20:34:14.633098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f840) on tqpair=0x7c8ae0 00:28:36.167 ===================================================== 00:28:36.167 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:36.167 ===================================================== 00:28:36.167 Controller Capabilities/Features 00:28:36.167 ================================ 00:28:36.167 Vendor ID: 0000 00:28:36.167 Subsystem Vendor ID: 0000 00:28:36.167 Serial Number: .................... 00:28:36.167 Model Number: ........................................ 
00:28:36.167 Firmware Version: 24.09 00:28:36.167 Recommended Arb Burst: 0 00:28:36.167 IEEE OUI Identifier: 00 00 00 00:28:36.167 Multi-path I/O 00:28:36.167 May have multiple subsystem ports: No 00:28:36.167 May have multiple controllers: No 00:28:36.167 Associated with SR-IOV VF: No 00:28:36.167 Max Data Transfer Size: 131072 00:28:36.167 Max Number of Namespaces: 0 00:28:36.167 Max Number of I/O Queues: 1024 00:28:36.167 NVMe Specification Version (VS): 1.3 00:28:36.167 NVMe Specification Version (Identify): 1.3 00:28:36.167 Maximum Queue Entries: 128 00:28:36.167 Contiguous Queues Required: Yes 00:28:36.167 Arbitration Mechanisms Supported 00:28:36.167 Weighted Round Robin: Not Supported 00:28:36.167 Vendor Specific: Not Supported 00:28:36.167 Reset Timeout: 15000 ms 00:28:36.167 Doorbell Stride: 4 bytes 00:28:36.167 NVM Subsystem Reset: Not Supported 00:28:36.167 Command Sets Supported 00:28:36.167 NVM Command Set: Supported 00:28:36.167 Boot Partition: Not Supported 00:28:36.167 Memory Page Size Minimum: 4096 bytes 00:28:36.167 Memory Page Size Maximum: 4096 bytes 00:28:36.167 Persistent Memory Region: Not Supported 00:28:36.167 Optional Asynchronous Events Supported 00:28:36.167 Namespace Attribute Notices: Not Supported 00:28:36.167 Firmware Activation Notices: Not Supported 00:28:36.167 ANA Change Notices: Not Supported 00:28:36.167 PLE Aggregate Log Change Notices: Not Supported 00:28:36.167 LBA Status Info Alert Notices: Not Supported 00:28:36.167 EGE Aggregate Log Change Notices: Not Supported 00:28:36.167 Normal NVM Subsystem Shutdown event: Not Supported 00:28:36.167 Zone Descriptor Change Notices: Not Supported 00:28:36.167 Discovery Log Change Notices: Supported 00:28:36.167 Controller Attributes 00:28:36.167 128-bit Host Identifier: Not Supported 00:28:36.167 Non-Operational Permissive Mode: Not Supported 00:28:36.167 NVM Sets: Not Supported 00:28:36.167 Read Recovery Levels: Not Supported 00:28:36.167 Endurance Groups: Not Supported 00:28:36.167 Predictable Latency Mode: Not Supported 00:28:36.167 Traffic Based Keep ALive: Not Supported 00:28:36.168 Namespace Granularity: Not Supported 00:28:36.168 SQ Associations: Not Supported 00:28:36.168 UUID List: Not Supported 00:28:36.168 Multi-Domain Subsystem: Not Supported 00:28:36.168 Fixed Capacity Management: Not Supported 00:28:36.168 Variable Capacity Management: Not Supported 00:28:36.168 Delete Endurance Group: Not Supported 00:28:36.168 Delete NVM Set: Not Supported 00:28:36.168 Extended LBA Formats Supported: Not Supported 00:28:36.168 Flexible Data Placement Supported: Not Supported 00:28:36.168 00:28:36.168 Controller Memory Buffer Support 00:28:36.168 ================================ 00:28:36.168 Supported: No 00:28:36.168 00:28:36.168 Persistent Memory Region Support 00:28:36.168 ================================ 00:28:36.168 Supported: No 00:28:36.168 00:28:36.168 Admin Command Set Attributes 00:28:36.168 ============================ 00:28:36.168 Security Send/Receive: Not Supported 00:28:36.168 Format NVM: Not Supported 00:28:36.168 Firmware Activate/Download: Not Supported 00:28:36.168 Namespace Management: Not Supported 00:28:36.168 Device Self-Test: Not Supported 00:28:36.168 Directives: Not Supported 00:28:36.168 NVMe-MI: Not Supported 00:28:36.168 Virtualization Management: Not Supported 00:28:36.168 Doorbell Buffer Config: Not Supported 00:28:36.168 Get LBA Status Capability: Not Supported 00:28:36.168 Command & Feature Lockdown Capability: Not Supported 00:28:36.168 Abort Command Limit: 1 00:28:36.168 Async 
Event Request Limit: 4 00:28:36.168 Number of Firmware Slots: N/A 00:28:36.168 Firmware Slot 1 Read-Only: N/A 00:28:36.168 Firmware Activation Without Reset: N/A 00:28:36.168 Multiple Update Detection Support: N/A 00:28:36.168 Firmware Update Granularity: No Information Provided 00:28:36.168 Per-Namespace SMART Log: No 00:28:36.168 Asymmetric Namespace Access Log Page: Not Supported 00:28:36.168 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:36.168 Command Effects Log Page: Not Supported 00:28:36.168 Get Log Page Extended Data: Supported 00:28:36.168 Telemetry Log Pages: Not Supported 00:28:36.168 Persistent Event Log Pages: Not Supported 00:28:36.168 Supported Log Pages Log Page: May Support 00:28:36.168 Commands Supported & Effects Log Page: Not Supported 00:28:36.168 Feature Identifiers & Effects Log Page:May Support 00:28:36.168 NVMe-MI Commands & Effects Log Page: May Support 00:28:36.168 Data Area 4 for Telemetry Log: Not Supported 00:28:36.168 Error Log Page Entries Supported: 128 00:28:36.168 Keep Alive: Not Supported 00:28:36.168 00:28:36.168 NVM Command Set Attributes 00:28:36.168 ========================== 00:28:36.168 Submission Queue Entry Size 00:28:36.168 Max: 1 00:28:36.168 Min: 1 00:28:36.168 Completion Queue Entry Size 00:28:36.168 Max: 1 00:28:36.168 Min: 1 00:28:36.168 Number of Namespaces: 0 00:28:36.168 Compare Command: Not Supported 00:28:36.168 Write Uncorrectable Command: Not Supported 00:28:36.168 Dataset Management Command: Not Supported 00:28:36.168 Write Zeroes Command: Not Supported 00:28:36.168 Set Features Save Field: Not Supported 00:28:36.168 Reservations: Not Supported 00:28:36.168 Timestamp: Not Supported 00:28:36.168 Copy: Not Supported 00:28:36.168 Volatile Write Cache: Not Present 00:28:36.168 Atomic Write Unit (Normal): 1 00:28:36.168 Atomic Write Unit (PFail): 1 00:28:36.168 Atomic Compare & Write Unit: 1 00:28:36.168 Fused Compare & Write: Supported 00:28:36.168 Scatter-Gather List 00:28:36.168 SGL Command Set: Supported 00:28:36.168 SGL Keyed: Supported 00:28:36.168 SGL Bit Bucket Descriptor: Not Supported 00:28:36.168 SGL Metadata Pointer: Not Supported 00:28:36.168 Oversized SGL: Not Supported 00:28:36.168 SGL Metadata Address: Not Supported 00:28:36.168 SGL Offset: Supported 00:28:36.168 Transport SGL Data Block: Not Supported 00:28:36.168 Replay Protected Memory Block: Not Supported 00:28:36.168 00:28:36.168 Firmware Slot Information 00:28:36.168 ========================= 00:28:36.168 Active slot: 0 00:28:36.168 00:28:36.168 00:28:36.168 Error Log 00:28:36.168 ========= 00:28:36.168 00:28:36.168 Active Namespaces 00:28:36.168 ================= 00:28:36.168 Discovery Log Page 00:28:36.168 ================== 00:28:36.168 Generation Counter: 2 00:28:36.168 Number of Records: 2 00:28:36.168 Record Format: 0 00:28:36.168 00:28:36.168 Discovery Log Entry 0 00:28:36.168 ---------------------- 00:28:36.168 Transport Type: 3 (TCP) 00:28:36.168 Address Family: 1 (IPv4) 00:28:36.168 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:36.168 Entry Flags: 00:28:36.168 Duplicate Returned Information: 1 00:28:36.168 Explicit Persistent Connection Support for Discovery: 1 00:28:36.168 Transport Requirements: 00:28:36.168 Secure Channel: Not Required 00:28:36.168 Port ID: 0 (0x0000) 00:28:36.168 Controller ID: 65535 (0xffff) 00:28:36.168 Admin Max SQ Size: 128 00:28:36.168 Transport Service Identifier: 4420 00:28:36.168 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:36.168 Transport Address: 10.0.0.2 00:28:36.168 
Discovery Log Entry 1 00:28:36.168 ---------------------- 00:28:36.168 Transport Type: 3 (TCP) 00:28:36.168 Address Family: 1 (IPv4) 00:28:36.168 Subsystem Type: 2 (NVM Subsystem) 00:28:36.168 Entry Flags: 00:28:36.168 Duplicate Returned Information: 0 00:28:36.168 Explicit Persistent Connection Support for Discovery: 0 00:28:36.168 Transport Requirements: 00:28:36.168 Secure Channel: Not Required 00:28:36.168 Port ID: 0 (0x0000) 00:28:36.169 Controller ID: 65535 (0xffff) 00:28:36.169 Admin Max SQ Size: 128 00:28:36.169 Transport Service Identifier: 4420 00:28:36.169 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:36.169 Transport Address: 10.0.0.2 [2024-07-15 20:34:14.633211] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:36.169 [2024-07-15 20:34:14.633233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f240) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.633245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.169 [2024-07-15 20:34:14.633255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f3c0) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.633262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.169 [2024-07-15 20:34:14.633271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f540) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.633278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.169 [2024-07-15 20:34:14.633302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.633310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.169 [2024-07-15 20:34:14.633328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.169 [2024-07-15 20:34:14.633372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.169 [2024-07-15 20:34:14.633396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.169 [2024-07-15 20:34:14.633597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.169 [2024-07-15 20:34:14.633613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.169 [2024-07-15 20:34:14.633620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.633638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.169 [2024-07-15 20:34:14.633663] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.169 [2024-07-15 20:34:14.633690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.169 [2024-07-15 20:34:14.633841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.169 [2024-07-15 20:34:14.633854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.169 [2024-07-15 20:34:14.633861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.633883] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:36.169 [2024-07-15 20:34:14.633892] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:36.169 [2024-07-15 20:34:14.633908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.633923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.169 [2024-07-15 20:34:14.633934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.169 [2024-07-15 20:34:14.633955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.169 [2024-07-15 20:34:14.634132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.169 [2024-07-15 20:34:14.634147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.169 [2024-07-15 20:34:14.634154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.634178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.169 [2024-07-15 20:34:14.634204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.169 [2024-07-15 20:34:14.634225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.169 [2024-07-15 20:34:14.634358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.169 [2024-07-15 20:34:14.634370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.169 [2024-07-15 20:34:14.634377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.634403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634420] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.169 [2024-07-15 20:34:14.634430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.169 [2024-07-15 20:34:14.634451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.169 [2024-07-15 20:34:14.634590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.169 [2024-07-15 20:34:14.634605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.169 [2024-07-15 20:34:14.634612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.169 [2024-07-15 20:34:14.634635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.169 [2024-07-15 20:34:14.634650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.169 [2024-07-15 20:34:14.634661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.169 [2024-07-15 20:34:14.634682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.169 [2024-07-15 20:34:14.634831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.169 [2024-07-15 20:34:14.634843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.169 [2024-07-15 20:34:14.634850] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.170 [2024-07-15 20:34:14.634857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.170 [2024-07-15 20:34:14.634872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.170 [2024-07-15 20:34:14.638892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.170 [2024-07-15 20:34:14.638900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c8ae0) 00:28:36.170 [2024-07-15 20:34:14.638911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.170 [2024-07-15 20:34:14.638933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81f6c0, cid 3, qid 0 00:28:36.170 [2024-07-15 20:34:14.639105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.170 [2024-07-15 20:34:14.639118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.170 [2024-07-15 20:34:14.639125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.170 [2024-07-15 20:34:14.639132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81f6c0) on tqpair=0x7c8ae0 00:28:36.170 [2024-07-15 20:34:14.639145] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:36.170 00:28:36.170 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:36.170 [2024-07-15 20:34:14.673613] 
Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:28:36.170 [2024-07-15 20:34:14.673657] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147266 ] 00:28:36.170 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.433 [2024-07-15 20:34:14.705791] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:36.433 [2024-07-15 20:34:14.705844] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:36.433 [2024-07-15 20:34:14.705854] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:36.433 [2024-07-15 20:34:14.705890] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:36.433 [2024-07-15 20:34:14.705901] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:36.433 [2024-07-15 20:34:14.709909] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:36.433 [2024-07-15 20:34:14.709963] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20feae0 0 00:28:36.433 [2024-07-15 20:34:14.716901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:36.433 [2024-07-15 20:34:14.716920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:36.433 [2024-07-15 20:34:14.716928] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:36.433 [2024-07-15 20:34:14.716934] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:36.433 [2024-07-15 20:34:14.716987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.717000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.717007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.433 [2024-07-15 20:34:14.717021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:36.433 [2024-07-15 20:34:14.717047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.433 [2024-07-15 20:34:14.723890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.433 [2024-07-15 20:34:14.723907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.433 [2024-07-15 20:34:14.723914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.723937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.433 [2024-07-15 20:34:14.723964] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:36.433 [2024-07-15 20:34:14.723975] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:36.433 [2024-07-15 20:34:14.723985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:36.433 [2024-07-15 20:34:14.724003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.433 [2024-07-15 
20:34:14.724011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.433 [2024-07-15 20:34:14.724029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.433 [2024-07-15 20:34:14.724053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.433 [2024-07-15 20:34:14.724230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.433 [2024-07-15 20:34:14.724246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.433 [2024-07-15 20:34:14.724252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.433 [2024-07-15 20:34:14.724267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:36.433 [2024-07-15 20:34:14.724281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:36.433 [2024-07-15 20:34:14.724293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.433 [2024-07-15 20:34:14.724327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.433 [2024-07-15 20:34:14.724349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.433 [2024-07-15 20:34:14.724537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.433 [2024-07-15 20:34:14.724552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.433 [2024-07-15 20:34:14.724559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.433 [2024-07-15 20:34:14.724574] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:36.433 [2024-07-15 20:34:14.724588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:36.433 [2024-07-15 20:34:14.724601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.433 [2024-07-15 20:34:14.724626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.433 [2024-07-15 20:34:14.724648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.433 [2024-07-15 20:34:14.724833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.433 
[2024-07-15 20:34:14.724848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.433 [2024-07-15 20:34:14.724855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.433 [2024-07-15 20:34:14.724870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:36.433 [2024-07-15 20:34:14.724896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.724913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.433 [2024-07-15 20:34:14.724924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.433 [2024-07-15 20:34:14.724945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.433 [2024-07-15 20:34:14.725138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.433 [2024-07-15 20:34:14.725150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.433 [2024-07-15 20:34:14.725156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.725163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.433 [2024-07-15 20:34:14.725171] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:36.433 [2024-07-15 20:34:14.725179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:36.433 [2024-07-15 20:34:14.725192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:36.433 [2024-07-15 20:34:14.725302] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:36.433 [2024-07-15 20:34:14.725310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:36.433 [2024-07-15 20:34:14.725326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.725334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.725341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.433 [2024-07-15 20:34:14.725352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.433 [2024-07-15 20:34:14.725373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.433 [2024-07-15 20:34:14.725551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.433 [2024-07-15 20:34:14.725567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.433 [2024-07-15 20:34:14.725574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.725582] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.433 [2024-07-15 20:34:14.725591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:36.433 [2024-07-15 20:34:14.725608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.725617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.433 [2024-07-15 20:34:14.725624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.433 [2024-07-15 20:34:14.725634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.433 [2024-07-15 20:34:14.725657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.433 [2024-07-15 20:34:14.725794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.434 [2024-07-15 20:34:14.725810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.434 [2024-07-15 20:34:14.725816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.725823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.434 [2024-07-15 20:34:14.725831] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:36.434 [2024-07-15 20:34:14.725839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.725853] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:36.434 [2024-07-15 20:34:14.725871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.725892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.725900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.725911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.434 [2024-07-15 20:34:14.725933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.434 [2024-07-15 20:34:14.726148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.434 [2024-07-15 20:34:14.726164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.434 [2024-07-15 20:34:14.726171] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726177] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=4096, cccid=0 00:28:36.434 [2024-07-15 20:34:14.726185] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2155240) on tqpair(0x20feae0): expected_datao=0, payload_size=4096 00:28:36.434 [2024-07-15 20:34:14.726193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726207] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:28:36.434 [2024-07-15 20:34:14.726216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.434 [2024-07-15 20:34:14.726310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.434 [2024-07-15 20:34:14.726317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.434 [2024-07-15 20:34:14.726334] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:36.434 [2024-07-15 20:34:14.726346] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:36.434 [2024-07-15 20:34:14.726355] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:36.434 [2024-07-15 20:34:14.726362] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:36.434 [2024-07-15 20:34:14.726370] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:36.434 [2024-07-15 20:34:14.726378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.726392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.726404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.726444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:36.434 [2024-07-15 20:34:14.726465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.434 [2024-07-15 20:34:14.726705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.434 [2024-07-15 20:34:14.726721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.434 [2024-07-15 20:34:14.726728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.434 [2024-07-15 20:34:14.726745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.726769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.434 [2024-07-15 20:34:14.726779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 
[2024-07-15 20:34:14.726792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.726816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.434 [2024-07-15 20:34:14.726827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.726848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.434 [2024-07-15 20:34:14.726857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.726908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.434 [2024-07-15 20:34:14.726917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.726937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.726950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.726957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.726967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.434 [2024-07-15 20:34:14.726990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155240, cid 0, qid 0 00:28:36.434 [2024-07-15 20:34:14.727002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21553c0, cid 1, qid 0 00:28:36.434 [2024-07-15 20:34:14.727009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155540, cid 2, qid 0 00:28:36.434 [2024-07-15 20:34:14.727017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.434 [2024-07-15 20:34:14.727025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155840, cid 4, qid 0 00:28:36.434 [2024-07-15 20:34:14.727260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.434 [2024-07-15 20:34:14.727275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.434 [2024-07-15 20:34:14.727282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.727289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155840) on tqpair=0x20feae0 00:28:36.434 [2024-07-15 20:34:14.727297] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:36.434 [2024-07-15 20:34:14.727321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.727335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.727345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.727356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.727363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.727369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.727379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:36.434 [2024-07-15 20:34:14.727405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155840, cid 4, qid 0 00:28:36.434 [2024-07-15 20:34:14.727606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.434 [2024-07-15 20:34:14.727621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.434 [2024-07-15 20:34:14.727628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.727635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155840) on tqpair=0x20feae0 00:28:36.434 [2024-07-15 20:34:14.727699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.727716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.727750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.727758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20feae0) 00:28:36.434 [2024-07-15 20:34:14.727769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.434 [2024-07-15 20:34:14.727790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155840, cid 4, qid 0 00:28:36.434 [2024-07-15 20:34:14.731891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.434 [2024-07-15 20:34:14.731908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.434 [2024-07-15 20:34:14.731915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.731936] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=4096, cccid=4 00:28:36.434 [2024-07-15 20:34:14.731945] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2155840) on tqpair(0x20feae0): expected_datao=0, payload_size=4096 00:28:36.434 [2024-07-15 20:34:14.731952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.731963] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.731971] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.731979] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.434 [2024-07-15 20:34:14.731988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.434 [2024-07-15 20:34:14.731994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.732001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155840) on tqpair=0x20feae0 00:28:36.434 [2024-07-15 20:34:14.732015] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:36.434 [2024-07-15 20:34:14.732032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.732051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:36.434 [2024-07-15 20:34:14.732064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.434 [2024-07-15 20:34:14.732072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.732082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.732105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155840, cid 4, qid 0 00:28:36.435 [2024-07-15 20:34:14.732348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.435 [2024-07-15 20:34:14.732364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.435 [2024-07-15 20:34:14.732371] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732377] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=4096, cccid=4 00:28:36.435 [2024-07-15 20:34:14.732385] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2155840) on tqpair(0x20feae0): expected_datao=0, payload_size=4096 00:28:36.435 [2024-07-15 20:34:14.732392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732402] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732410] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.435 [2024-07-15 20:34:14.732503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.435 [2024-07-15 20:34:14.732509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155840) on tqpair=0x20feae0 00:28:36.435 [2024-07-15 20:34:14.732538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.732558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.732572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20feae0) 
00:28:36.435 [2024-07-15 20:34:14.732590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.732612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155840, cid 4, qid 0 00:28:36.435 [2024-07-15 20:34:14.732814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.435 [2024-07-15 20:34:14.732827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.435 [2024-07-15 20:34:14.732833] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=4096, cccid=4 00:28:36.435 [2024-07-15 20:34:14.732847] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2155840) on tqpair(0x20feae0): expected_datao=0, payload_size=4096 00:28:36.435 [2024-07-15 20:34:14.732855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732865] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732873] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.435 [2024-07-15 20:34:14.732975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.435 [2024-07-15 20:34:14.732982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.732989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155840) on tqpair=0x20feae0 00:28:36.435 [2024-07-15 20:34:14.733001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.733015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.733030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.733041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.733050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.733059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.733068] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:36.435 [2024-07-15 20:34:14.733076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:36.435 [2024-07-15 20:34:14.733084] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:36.435 [2024-07-15 20:34:14.733102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733112] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.733122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.733140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.733164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.435 [2024-07-15 20:34:14.733204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155840, cid 4, qid 0 00:28:36.435 [2024-07-15 20:34:14.733216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21559c0, cid 5, qid 0 00:28:36.435 [2024-07-15 20:34:14.733450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.435 [2024-07-15 20:34:14.733463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.435 [2024-07-15 20:34:14.733470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155840) on tqpair=0x20feae0 00:28:36.435 [2024-07-15 20:34:14.733487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.435 [2024-07-15 20:34:14.733496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.435 [2024-07-15 20:34:14.733502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21559c0) on tqpair=0x20feae0 00:28:36.435 [2024-07-15 20:34:14.733524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.733544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.733579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21559c0, cid 5, qid 0 00:28:36.435 [2024-07-15 20:34:14.733822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.435 [2024-07-15 20:34:14.733839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.435 [2024-07-15 20:34:14.733845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21559c0) on tqpair=0x20feae0 00:28:36.435 [2024-07-15 20:34:14.733868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.733884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.733895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.733917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x21559c0, cid 5, qid 0 00:28:36.435 [2024-07-15 20:34:14.734119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.435 [2024-07-15 20:34:14.734131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.435 [2024-07-15 20:34:14.734137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21559c0) on tqpair=0x20feae0 00:28:36.435 [2024-07-15 20:34:14.734159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.734179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.734199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21559c0, cid 5, qid 0 00:28:36.435 [2024-07-15 20:34:14.734337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.435 [2024-07-15 20:34:14.734352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.435 [2024-07-15 20:34:14.734362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21559c0) on tqpair=0x20feae0 00:28:36.435 [2024-07-15 20:34:14.734392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.734414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.734426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.734443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.734455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.734471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.734483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20feae0) 00:28:36.435 [2024-07-15 20:34:14.734515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.435 [2024-07-15 20:34:14.734537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21559c0, cid 5, qid 0 00:28:36.435 [2024-07-15 20:34:14.734547] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155840, cid 4, qid 0 00:28:36.435 [2024-07-15 20:34:14.734572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155b40, cid 6, qid 0 00:28:36.435 [2024-07-15 20:34:14.734579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155cc0, cid 7, qid 0 00:28:36.435 [2024-07-15 20:34:14.734925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.435 [2024-07-15 20:34:14.734941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.435 [2024-07-15 20:34:14.734948] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734955] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=8192, cccid=5 00:28:36.435 [2024-07-15 20:34:14.734963] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21559c0) on tqpair(0x20feae0): expected_datao=0, payload_size=8192 00:28:36.435 [2024-07-15 20:34:14.734970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734981] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734988] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.435 [2024-07-15 20:34:14.734997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.435 [2024-07-15 20:34:14.735005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.436 [2024-07-15 20:34:14.735012] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735018] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=512, cccid=4 00:28:36.436 [2024-07-15 20:34:14.735026] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2155840) on tqpair(0x20feae0): expected_datao=0, payload_size=512 00:28:36.436 [2024-07-15 20:34:14.735033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735042] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735049] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.436 [2024-07-15 20:34:14.735070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.436 [2024-07-15 20:34:14.735077] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735083] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=512, cccid=6 00:28:36.436 [2024-07-15 20:34:14.735090] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2155b40) on tqpair(0x20feae0): expected_datao=0, payload_size=512 00:28:36.436 [2024-07-15 20:34:14.735098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735107] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735114] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.436 [2024-07-15 20:34:14.735131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.436 [2024-07-15 20:34:14.735137] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:28:36.436 [2024-07-15 20:34:14.735143] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20feae0): datao=0, datal=4096, cccid=7 00:28:36.436 [2024-07-15 20:34:14.735151] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2155cc0) on tqpair(0x20feae0): expected_datao=0, payload_size=4096 00:28:36.436 [2024-07-15 20:34:14.735158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735168] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735175] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.436 [2024-07-15 20:34:14.735196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.436 [2024-07-15 20:34:14.735217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21559c0) on tqpair=0x20feae0 00:28:36.436 [2024-07-15 20:34:14.735241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.436 [2024-07-15 20:34:14.735252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.436 [2024-07-15 20:34:14.735259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155840) on tqpair=0x20feae0 00:28:36.436 [2024-07-15 20:34:14.735279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.436 [2024-07-15 20:34:14.735289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.436 [2024-07-15 20:34:14.735296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155b40) on tqpair=0x20feae0 00:28:36.436 [2024-07-15 20:34:14.735312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.436 [2024-07-15 20:34:14.735321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.436 [2024-07-15 20:34:14.735327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.436 [2024-07-15 20:34:14.735334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155cc0) on tqpair=0x20feae0 00:28:36.436 ===================================================== 00:28:36.436 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.436 ===================================================== 00:28:36.436 Controller Capabilities/Features 00:28:36.436 ================================ 00:28:36.436 Vendor ID: 8086 00:28:36.436 Subsystem Vendor ID: 8086 00:28:36.436 Serial Number: SPDK00000000000001 00:28:36.436 Model Number: SPDK bdev Controller 00:28:36.436 Firmware Version: 24.09 00:28:36.436 Recommended Arb Burst: 6 00:28:36.436 IEEE OUI Identifier: e4 d2 5c 00:28:36.436 Multi-path I/O 00:28:36.436 May have multiple subsystem ports: Yes 00:28:36.436 May have multiple controllers: Yes 00:28:36.436 Associated with SR-IOV VF: No 00:28:36.436 Max Data Transfer Size: 131072 00:28:36.436 Max Number of Namespaces: 32 00:28:36.436 Max Number of I/O Queues: 127 00:28:36.436 NVMe Specification Version (VS): 1.3 00:28:36.436 NVMe Specification Version (Identify): 1.3 00:28:36.436 Maximum Queue Entries: 128 00:28:36.436 
Contiguous Queues Required: Yes 00:28:36.436 Arbitration Mechanisms Supported 00:28:36.436 Weighted Round Robin: Not Supported 00:28:36.436 Vendor Specific: Not Supported 00:28:36.436 Reset Timeout: 15000 ms 00:28:36.436 Doorbell Stride: 4 bytes 00:28:36.436 NVM Subsystem Reset: Not Supported 00:28:36.436 Command Sets Supported 00:28:36.436 NVM Command Set: Supported 00:28:36.436 Boot Partition: Not Supported 00:28:36.436 Memory Page Size Minimum: 4096 bytes 00:28:36.436 Memory Page Size Maximum: 4096 bytes 00:28:36.436 Persistent Memory Region: Not Supported 00:28:36.436 Optional Asynchronous Events Supported 00:28:36.436 Namespace Attribute Notices: Supported 00:28:36.436 Firmware Activation Notices: Not Supported 00:28:36.436 ANA Change Notices: Not Supported 00:28:36.436 PLE Aggregate Log Change Notices: Not Supported 00:28:36.436 LBA Status Info Alert Notices: Not Supported 00:28:36.436 EGE Aggregate Log Change Notices: Not Supported 00:28:36.436 Normal NVM Subsystem Shutdown event: Not Supported 00:28:36.436 Zone Descriptor Change Notices: Not Supported 00:28:36.436 Discovery Log Change Notices: Not Supported 00:28:36.436 Controller Attributes 00:28:36.436 128-bit Host Identifier: Supported 00:28:36.436 Non-Operational Permissive Mode: Not Supported 00:28:36.436 NVM Sets: Not Supported 00:28:36.436 Read Recovery Levels: Not Supported 00:28:36.436 Endurance Groups: Not Supported 00:28:36.436 Predictable Latency Mode: Not Supported 00:28:36.436 Traffic Based Keep ALive: Not Supported 00:28:36.436 Namespace Granularity: Not Supported 00:28:36.436 SQ Associations: Not Supported 00:28:36.436 UUID List: Not Supported 00:28:36.436 Multi-Domain Subsystem: Not Supported 00:28:36.436 Fixed Capacity Management: Not Supported 00:28:36.436 Variable Capacity Management: Not Supported 00:28:36.436 Delete Endurance Group: Not Supported 00:28:36.436 Delete NVM Set: Not Supported 00:28:36.436 Extended LBA Formats Supported: Not Supported 00:28:36.436 Flexible Data Placement Supported: Not Supported 00:28:36.436 00:28:36.436 Controller Memory Buffer Support 00:28:36.436 ================================ 00:28:36.436 Supported: No 00:28:36.436 00:28:36.436 Persistent Memory Region Support 00:28:36.436 ================================ 00:28:36.436 Supported: No 00:28:36.436 00:28:36.436 Admin Command Set Attributes 00:28:36.436 ============================ 00:28:36.436 Security Send/Receive: Not Supported 00:28:36.436 Format NVM: Not Supported 00:28:36.436 Firmware Activate/Download: Not Supported 00:28:36.436 Namespace Management: Not Supported 00:28:36.436 Device Self-Test: Not Supported 00:28:36.436 Directives: Not Supported 00:28:36.436 NVMe-MI: Not Supported 00:28:36.436 Virtualization Management: Not Supported 00:28:36.436 Doorbell Buffer Config: Not Supported 00:28:36.436 Get LBA Status Capability: Not Supported 00:28:36.436 Command & Feature Lockdown Capability: Not Supported 00:28:36.436 Abort Command Limit: 4 00:28:36.436 Async Event Request Limit: 4 00:28:36.436 Number of Firmware Slots: N/A 00:28:36.436 Firmware Slot 1 Read-Only: N/A 00:28:36.436 Firmware Activation Without Reset: N/A 00:28:36.436 Multiple Update Detection Support: N/A 00:28:36.436 Firmware Update Granularity: No Information Provided 00:28:36.436 Per-Namespace SMART Log: No 00:28:36.436 Asymmetric Namespace Access Log Page: Not Supported 00:28:36.436 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:36.436 Command Effects Log Page: Supported 00:28:36.436 Get Log Page Extended Data: Supported 00:28:36.436 Telemetry Log Pages: Not 
Supported 00:28:36.436 Persistent Event Log Pages: Not Supported 00:28:36.436 Supported Log Pages Log Page: May Support 00:28:36.436 Commands Supported & Effects Log Page: Not Supported 00:28:36.436 Feature Identifiers & Effects Log Page:May Support 00:28:36.436 NVMe-MI Commands & Effects Log Page: May Support 00:28:36.436 Data Area 4 for Telemetry Log: Not Supported 00:28:36.436 Error Log Page Entries Supported: 128 00:28:36.436 Keep Alive: Supported 00:28:36.436 Keep Alive Granularity: 10000 ms 00:28:36.436 00:28:36.436 NVM Command Set Attributes 00:28:36.436 ========================== 00:28:36.436 Submission Queue Entry Size 00:28:36.436 Max: 64 00:28:36.436 Min: 64 00:28:36.436 Completion Queue Entry Size 00:28:36.436 Max: 16 00:28:36.436 Min: 16 00:28:36.436 Number of Namespaces: 32 00:28:36.436 Compare Command: Supported 00:28:36.436 Write Uncorrectable Command: Not Supported 00:28:36.436 Dataset Management Command: Supported 00:28:36.436 Write Zeroes Command: Supported 00:28:36.436 Set Features Save Field: Not Supported 00:28:36.436 Reservations: Supported 00:28:36.436 Timestamp: Not Supported 00:28:36.436 Copy: Supported 00:28:36.436 Volatile Write Cache: Present 00:28:36.436 Atomic Write Unit (Normal): 1 00:28:36.436 Atomic Write Unit (PFail): 1 00:28:36.436 Atomic Compare & Write Unit: 1 00:28:36.436 Fused Compare & Write: Supported 00:28:36.436 Scatter-Gather List 00:28:36.436 SGL Command Set: Supported 00:28:36.436 SGL Keyed: Supported 00:28:36.436 SGL Bit Bucket Descriptor: Not Supported 00:28:36.436 SGL Metadata Pointer: Not Supported 00:28:36.436 Oversized SGL: Not Supported 00:28:36.436 SGL Metadata Address: Not Supported 00:28:36.436 SGL Offset: Supported 00:28:36.436 Transport SGL Data Block: Not Supported 00:28:36.437 Replay Protected Memory Block: Not Supported 00:28:36.437 00:28:36.437 Firmware Slot Information 00:28:36.437 ========================= 00:28:36.437 Active slot: 1 00:28:36.437 Slot 1 Firmware Revision: 24.09 00:28:36.437 00:28:36.437 00:28:36.437 Commands Supported and Effects 00:28:36.437 ============================== 00:28:36.437 Admin Commands 00:28:36.437 -------------- 00:28:36.437 Get Log Page (02h): Supported 00:28:36.437 Identify (06h): Supported 00:28:36.437 Abort (08h): Supported 00:28:36.437 Set Features (09h): Supported 00:28:36.437 Get Features (0Ah): Supported 00:28:36.437 Asynchronous Event Request (0Ch): Supported 00:28:36.437 Keep Alive (18h): Supported 00:28:36.437 I/O Commands 00:28:36.437 ------------ 00:28:36.437 Flush (00h): Supported LBA-Change 00:28:36.437 Write (01h): Supported LBA-Change 00:28:36.437 Read (02h): Supported 00:28:36.437 Compare (05h): Supported 00:28:36.437 Write Zeroes (08h): Supported LBA-Change 00:28:36.437 Dataset Management (09h): Supported LBA-Change 00:28:36.437 Copy (19h): Supported LBA-Change 00:28:36.437 00:28:36.437 Error Log 00:28:36.437 ========= 00:28:36.437 00:28:36.437 Arbitration 00:28:36.437 =========== 00:28:36.437 Arbitration Burst: 1 00:28:36.437 00:28:36.437 Power Management 00:28:36.437 ================ 00:28:36.437 Number of Power States: 1 00:28:36.437 Current Power State: Power State #0 00:28:36.437 Power State #0: 00:28:36.437 Max Power: 0.00 W 00:28:36.437 Non-Operational State: Operational 00:28:36.437 Entry Latency: Not Reported 00:28:36.437 Exit Latency: Not Reported 00:28:36.437 Relative Read Throughput: 0 00:28:36.437 Relative Read Latency: 0 00:28:36.437 Relative Write Throughput: 0 00:28:36.437 Relative Write Latency: 0 00:28:36.437 Idle Power: Not Reported 00:28:36.437 Active 
Power: Not Reported 00:28:36.437 Non-Operational Permissive Mode: Not Supported 00:28:36.437 00:28:36.437 Health Information 00:28:36.437 ================== 00:28:36.437 Critical Warnings: 00:28:36.437 Available Spare Space: OK 00:28:36.437 Temperature: OK 00:28:36.437 Device Reliability: OK 00:28:36.437 Read Only: No 00:28:36.437 Volatile Memory Backup: OK 00:28:36.437 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:36.437 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:36.437 Available Spare: 0% 00:28:36.437 Available Spare Threshold: 0% 00:28:36.437 Life Percentage Used:[2024-07-15 20:34:14.735444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.735456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20feae0) 00:28:36.437 [2024-07-15 20:34:14.735467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.437 [2024-07-15 20:34:14.735490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2155cc0, cid 7, qid 0 00:28:36.437 [2024-07-15 20:34:14.735680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.437 [2024-07-15 20:34:14.735696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.437 [2024-07-15 20:34:14.735706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.735714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155cc0) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.735758] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:36.437 [2024-07-15 20:34:14.735777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155240) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.735788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.437 [2024-07-15 20:34:14.735797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21553c0) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.735804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.437 [2024-07-15 20:34:14.735828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2155540) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.735836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.437 [2024-07-15 20:34:14.735844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.735851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.437 [2024-07-15 20:34:14.735863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.735870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.736908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.437 [2024-07-15 20:34:14.736920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.437 [2024-07-15 20:34:14.736948] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.437 [2024-07-15 20:34:14.737142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.437 [2024-07-15 20:34:14.737157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.437 [2024-07-15 20:34:14.737165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.737183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.437 [2024-07-15 20:34:14.737209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.437 [2024-07-15 20:34:14.737236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.437 [2024-07-15 20:34:14.737400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.437 [2024-07-15 20:34:14.737411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.437 [2024-07-15 20:34:14.737418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.737433] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:36.437 [2024-07-15 20:34:14.737440] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:36.437 [2024-07-15 20:34:14.737456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.437 [2024-07-15 20:34:14.737482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.437 [2024-07-15 20:34:14.737506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.437 [2024-07-15 20:34:14.737694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.437 [2024-07-15 20:34:14.737709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.437 [2024-07-15 20:34:14.737716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.737739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.437 [2024-07-15 20:34:14.737765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.437 [2024-07-15 20:34:14.737785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.437 [2024-07-15 20:34:14.737928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.437 [2024-07-15 20:34:14.737944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.437 [2024-07-15 20:34:14.737951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.437 [2024-07-15 20:34:14.737974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.437 [2024-07-15 20:34:14.737984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.737991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.438 [2024-07-15 20:34:14.738001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.438 [2024-07-15 20:34:14.738023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.438 [2024-07-15 20:34:14.738158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.438 [2024-07-15 20:34:14.738173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.438 [2024-07-15 20:34:14.738181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.738189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.438 [2024-07-15 20:34:14.738205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.738215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.738221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.438 [2024-07-15 20:34:14.738232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.438 [2024-07-15 20:34:14.738253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.438 [2024-07-15 20:34:14.738439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.438 [2024-07-15 20:34:14.738455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.438 [2024-07-15 20:34:14.738461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.738468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.438 [2024-07-15 20:34:14.738484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.738494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.738501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.438 [2024-07-15 20:34:14.738512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.438 [2024-07-15 20:34:14.738550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.438 [2024-07-15 
20:34:14.740887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.438 [2024-07-15 20:34:14.740904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.438 [2024-07-15 20:34:14.740911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.740918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.438 [2024-07-15 20:34:14.740951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.740961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.740967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20feae0) 00:28:36.438 [2024-07-15 20:34:14.740978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.438 [2024-07-15 20:34:14.741000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21556c0, cid 3, qid 0 00:28:36.438 [2024-07-15 20:34:14.741176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.438 [2024-07-15 20:34:14.741191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.438 [2024-07-15 20:34:14.741198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.438 [2024-07-15 20:34:14.741205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21556c0) on tqpair=0x20feae0 00:28:36.438 [2024-07-15 20:34:14.741218] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 3 milliseconds 00:28:36.438 0% 00:28:36.438 Data Units Read: 0 00:28:36.438 Data Units Written: 0 00:28:36.438 Host Read Commands: 0 00:28:36.438 Host Write Commands: 0 00:28:36.438 Controller Busy Time: 0 minutes 00:28:36.438 Power Cycles: 0 00:28:36.438 Power On Hours: 0 hours 00:28:36.438 Unsafe Shutdowns: 0 00:28:36.438 Unrecoverable Media Errors: 0 00:28:36.438 Lifetime Error Log Entries: 0 00:28:36.438 Warning Temperature Time: 0 minutes 00:28:36.438 Critical Temperature Time: 0 minutes 00:28:36.438 00:28:36.438 Number of Queues 00:28:36.438 ================ 00:28:36.438 Number of I/O Submission Queues: 127 00:28:36.438 Number of I/O Completion Queues: 127 00:28:36.438 00:28:36.438 Active Namespaces 00:28:36.438 ================= 00:28:36.438 Namespace ID:1 00:28:36.438 Error Recovery Timeout: Unlimited 00:28:36.438 Command Set Identifier: NVM (00h) 00:28:36.438 Deallocate: Supported 00:28:36.438 Deallocated/Unwritten Error: Not Supported 00:28:36.438 Deallocated Read Value: Unknown 00:28:36.438 Deallocate in Write Zeroes: Not Supported 00:28:36.438 Deallocated Guard Field: 0xFFFF 00:28:36.438 Flush: Supported 00:28:36.438 Reservation: Supported 00:28:36.438 Namespace Sharing Capabilities: Multiple Controllers 00:28:36.438 Size (in LBAs): 131072 (0GiB) 00:28:36.438 Capacity (in LBAs): 131072 (0GiB) 00:28:36.438 Utilization (in LBAs): 131072 (0GiB) 00:28:36.438 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:36.438 EUI64: ABCDEF0123456789 00:28:36.438 UUID: f517b70f-e6fb-47ea-99f7-0ec061ee7a20 00:28:36.438 Thin Provisioning: Not Supported 00:28:36.438 Per-NS Atomic Units: Yes 00:28:36.438 Atomic Boundary Size (Normal): 0 00:28:36.438 Atomic Boundary Size (PFail): 0 00:28:36.438 Atomic Boundary Offset: 0 00:28:36.438 Maximum Single Source Range Length: 65535 00:28:36.438 Maximum Copy Length: 65535 00:28:36.438 Maximum 
Source Range Count: 1 00:28:36.438 NGUID/EUI64 Never Reused: No 00:28:36.438 Namespace Write Protected: No 00:28:36.438 Number of LBA Formats: 1 00:28:36.438 Current LBA Format: LBA Format #00 00:28:36.438 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:36.438 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:36.438 rmmod nvme_tcp 00:28:36.438 rmmod nvme_fabrics 00:28:36.438 rmmod nvme_keyring 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4147241 ']' 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4147241 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 4147241 ']' 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 4147241 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4147241 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4147241' 00:28:36.438 killing process with pid 4147241 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 4147241 00:28:36.438 20:34:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 4147241 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.698 20:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.230 20:34:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:39.230 00:28:39.230 real 0m5.138s 00:28:39.230 user 0m4.117s 00:28:39.230 sys 0m1.752s 00:28:39.230 20:34:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:39.230 20:34:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:39.230 ************************************ 00:28:39.230 END TEST nvmf_identify 00:28:39.230 ************************************ 00:28:39.230 20:34:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:39.230 20:34:17 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:39.230 20:34:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:39.230 20:34:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.230 20:34:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:39.230 ************************************ 00:28:39.230 START TEST nvmf_perf 00:28:39.230 ************************************ 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:39.230 * Looking for test storage... 00:28:39.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.230 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:39.231 20:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.134 20:34:19 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.134 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.134 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.134 
20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.134 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.134 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:28:41.134 00:28:41.134 --- 10.0.0.2 ping statistics --- 00:28:41.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.134 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:28:41.134 00:28:41.134 --- 10.0.0.1 ping statistics --- 00:28:41.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.134 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4149273 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4149273 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 4149273 ']' 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.134 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.134 [2024-07-15 20:34:19.397451] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:28:41.135 [2024-07-15 20:34:19.397523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.135 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.135 [2024-07-15 20:34:19.464236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.135 [2024-07-15 20:34:19.555705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.135 [2024-07-15 20:34:19.555766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.135 [2024-07-15 20:34:19.555792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.135 [2024-07-15 20:34:19.555805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.135 [2024-07-15 20:34:19.555817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.135 [2024-07-15 20:34:19.555926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.135 [2024-07-15 20:34:19.555968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.135 [2024-07-15 20:34:19.556041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.135 [2024-07-15 20:34:19.556043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:41.393 20:34:19 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:44.676 20:34:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:44.676 20:34:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:44.676 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:44.676 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:44.933 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:44.933 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:44.933 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:44.933 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:44.933 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:45.193 [2024-07-15 20:34:23.523313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:28:45.193 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.452 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:45.452 20:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.710 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:45.710 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:45.967 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.223 [2024-07-15 20:34:24.522990] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.223 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.481 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:46.481 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:46.481 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:46.481 20:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:47.859 Initializing NVMe Controllers 00:28:47.859 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:47.859 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:47.859 Initialization complete. Launching workers. 00:28:47.859 ======================================================== 00:28:47.859 Latency(us) 00:28:47.859 Device Information : IOPS MiB/s Average min max 00:28:47.859 PCIE (0000:88:00.0) NSID 1 from core 0: 85534.89 334.12 373.65 42.07 6276.02 00:28:47.859 ======================================================== 00:28:47.859 Total : 85534.89 334.12 373.65 42.07 6276.02 00:28:47.859 00:28:47.859 20:34:25 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:47.859 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.240 Initializing NVMe Controllers 00:28:49.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:49.240 Initialization complete. Launching workers. 
00:28:49.240 ======================================================== 00:28:49.240 Latency(us) 00:28:49.240 Device Information : IOPS MiB/s Average min max 00:28:49.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 143.00 0.56 7227.53 239.42 45781.92 00:28:49.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 42.00 0.16 24177.85 7940.34 50874.42 00:28:49.240 ======================================================== 00:28:49.240 Total : 185.00 0.72 11075.71 239.42 50874.42 00:28:49.240 00:28:49.240 20:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:49.240 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.621 Initializing NVMe Controllers 00:28:50.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:50.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:50.621 Initialization complete. Launching workers. 00:28:50.621 ======================================================== 00:28:50.621 Latency(us) 00:28:50.621 Device Information : IOPS MiB/s Average min max 00:28:50.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8278.66 32.34 3870.01 607.27 9144.82 00:28:50.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3815.46 14.90 8426.78 6603.13 18878.59 00:28:50.621 ======================================================== 00:28:50.621 Total : 12094.13 47.24 5307.58 607.27 18878.59 00:28:50.621 00:28:50.621 20:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:50.621 20:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:50.621 20:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.621 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.154 Initializing NVMe Controllers 00:28:53.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.154 Controller IO queue size 128, less than required. 00:28:53.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.154 Controller IO queue size 128, less than required. 00:28:53.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:53.154 Initialization complete. Launching workers. 
00:28:53.154 ======================================================== 00:28:53.154 Latency(us) 00:28:53.154 Device Information : IOPS MiB/s Average min max 00:28:53.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 957.16 239.29 136748.28 70819.75 233419.14 00:28:53.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.29 150.32 221709.82 119864.68 376090.74 00:28:53.155 ======================================================== 00:28:53.155 Total : 1558.45 389.61 169528.50 70819.75 376090.74 00:28:53.155 00:28:53.155 20:34:31 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:53.155 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.415 No valid NVMe controllers or AIO or URING devices found 00:28:53.415 Initializing NVMe Controllers 00:28:53.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.415 Controller IO queue size 128, less than required. 00:28:53.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.415 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:53.415 Controller IO queue size 128, less than required. 00:28:53.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.415 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:53.415 WARNING: Some requested NVMe devices were skipped 00:28:53.415 20:34:31 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:53.415 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.949 Initializing NVMe Controllers 00:28:55.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.949 Controller IO queue size 128, less than required. 00:28:55.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.949 Controller IO queue size 128, less than required. 00:28:55.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:55.949 Initialization complete. Launching workers. 
00:28:55.949 00:28:55.949 ==================== 00:28:55.949 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:55.949 TCP transport: 00:28:55.949 polls: 32852 00:28:55.949 idle_polls: 9913 00:28:55.949 sock_completions: 22939 00:28:55.949 nvme_completions: 3953 00:28:55.949 submitted_requests: 5986 00:28:55.949 queued_requests: 1 00:28:55.949 00:28:55.949 ==================== 00:28:55.949 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:55.949 TCP transport: 00:28:55.949 polls: 33962 00:28:55.949 idle_polls: 10484 00:28:55.949 sock_completions: 23478 00:28:55.950 nvme_completions: 3245 00:28:55.950 submitted_requests: 4848 00:28:55.950 queued_requests: 1 00:28:55.950 ======================================================== 00:28:55.950 Latency(us) 00:28:55.950 Device Information : IOPS MiB/s Average min max 00:28:55.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 987.94 246.99 134728.27 69327.14 221531.52 00:28:55.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 810.95 202.74 162559.70 70196.44 254737.72 00:28:55.950 ======================================================== 00:28:55.950 Total : 1798.89 449.72 147274.85 69327.14 254737.72 00:28:55.950 00:28:55.950 20:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:55.950 20:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.208 20:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:56.208 20:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:56.208 20:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:59.493 20:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=ca29b88b-e875-4801-9f88-a08397a3a74e 00:28:59.493 20:34:37 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb ca29b88b-e875-4801-9f88-a08397a3a74e 00:28:59.493 20:34:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=ca29b88b-e875-4801-9f88-a08397a3a74e 00:28:59.493 20:34:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:59.493 20:34:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:59.493 20:34:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:59.493 20:34:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:59.493 20:34:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:59.493 { 00:28:59.493 "uuid": "ca29b88b-e875-4801-9f88-a08397a3a74e", 00:28:59.493 "name": "lvs_0", 00:28:59.493 "base_bdev": "Nvme0n1", 00:28:59.493 "total_data_clusters": 238234, 00:28:59.493 "free_clusters": 238234, 00:28:59.493 "block_size": 512, 00:28:59.493 "cluster_size": 4194304 00:28:59.493 } 00:28:59.493 ]' 00:28:59.493 20:34:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ca29b88b-e875-4801-9f88-a08397a3a74e") .free_clusters' 00:28:59.751 20:34:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:59.751 20:34:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ca29b88b-e875-4801-9f88-a08397a3a74e") .cluster_size' 00:28:59.751 20:34:38 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:59.751 20:34:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:59.751 20:34:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:59.751 952936 00:28:59.751 20:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:59.751 20:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:59.751 20:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ca29b88b-e875-4801-9f88-a08397a3a74e lbd_0 20480 00:29:00.318 20:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=f163ee86-a3f3-4850-9505-36617611651a 00:29:00.318 20:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore f163ee86-a3f3-4850-9505-36617611651a lvs_n_0 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=e7bb1bc8-4785-4911-98c1-4bae604d7bad 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb e7bb1bc8-4785-4911-98c1-4bae604d7bad 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=e7bb1bc8-4785-4911-98c1-4bae604d7bad 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:01.255 { 00:29:01.255 "uuid": "ca29b88b-e875-4801-9f88-a08397a3a74e", 00:29:01.255 "name": "lvs_0", 00:29:01.255 "base_bdev": "Nvme0n1", 00:29:01.255 "total_data_clusters": 238234, 00:29:01.255 "free_clusters": 233114, 00:29:01.255 "block_size": 512, 00:29:01.255 "cluster_size": 4194304 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "uuid": "e7bb1bc8-4785-4911-98c1-4bae604d7bad", 00:29:01.255 "name": "lvs_n_0", 00:29:01.255 "base_bdev": "f163ee86-a3f3-4850-9505-36617611651a", 00:29:01.255 "total_data_clusters": 5114, 00:29:01.255 "free_clusters": 5114, 00:29:01.255 "block_size": 512, 00:29:01.255 "cluster_size": 4194304 00:29:01.255 } 00:29:01.255 ]' 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e7bb1bc8-4785-4911-98c1-4bae604d7bad") .free_clusters' 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:01.255 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e7bb1bc8-4785-4911-98c1-4bae604d7bad") .cluster_size' 00:29:01.514 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:01.514 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:01.514 20:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:01.514 20456 00:29:01.514 20:34:39 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:01.514 20:34:39 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7bb1bc8-4785-4911-98c1-4bae604d7bad lbd_nest_0 20456 00:29:01.775 20:34:40 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=f485f4dd-1c24-41f2-b226-a3bdd1aaac48 00:29:01.775 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.775 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:01.775 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f485f4dd-1c24-41f2-b226-a3bdd1aaac48 00:29:02.034 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.291 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:02.291 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:02.292 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:02.292 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:02.292 20:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.550 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.761 Initializing NVMe Controllers 00:29:14.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:14.761 Initialization complete. Launching workers. 00:29:14.761 ======================================================== 00:29:14.761 Latency(us) 00:29:14.761 Device Information : IOPS MiB/s Average min max 00:29:14.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.99 0.02 20835.95 230.26 46887.30 00:29:14.761 ======================================================== 00:29:14.761 Total : 47.99 0.02 20835.95 230.26 46887.30 00:29:14.761 00:29:14.761 20:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:14.761 20:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.761 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.932 Initializing NVMe Controllers 00:29:22.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:22.932 Initialization complete. Launching workers. 
00:29:22.932 ======================================================== 00:29:22.932 Latency(us) 00:29:22.932 Device Information : IOPS MiB/s Average min max 00:29:22.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.80 9.97 12548.92 5034.61 47896.93 00:29:22.932 ======================================================== 00:29:22.932 Total : 79.80 9.97 12548.92 5034.61 47896.93 00:29:22.932 00:29:23.192 20:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:23.192 20:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:23.192 20:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.192 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.400 Initializing NVMe Controllers 00:29:35.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:35.400 Initialization complete. Launching workers. 00:29:35.400 ======================================================== 00:29:35.400 Latency(us) 00:29:35.400 Device Information : IOPS MiB/s Average min max 00:29:35.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6963.60 3.40 4595.40 315.02 12037.99 00:29:35.400 ======================================================== 00:29:35.401 Total : 6963.60 3.40 4595.40 315.02 12037.99 00:29:35.401 00:29:35.401 20:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:35.401 20:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:35.401 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.372 Initializing NVMe Controllers 00:29:45.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.373 Initialization complete. Launching workers. 00:29:45.373 ======================================================== 00:29:45.373 Latency(us) 00:29:45.373 Device Information : IOPS MiB/s Average min max 00:29:45.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1846.53 230.82 17348.17 1410.36 55967.72 00:29:45.373 ======================================================== 00:29:45.373 Total : 1846.53 230.82 17348.17 1410.36 55967.72 00:29:45.373 00:29:45.373 20:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:45.373 20:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:45.373 20:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:45.373 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.347 Initializing NVMe Controllers 00:29:55.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.347 Controller IO queue size 128, less than required. 00:29:55.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
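For reference, the runs traced in this stretch of the log all come from the queue-depth/IO-size sweep declared at host/perf.sh@95-99 above: each depth in (1, 32, 128) is paired with each IO size in (512 B, 128 KiB) against the same NVMe/TCP subsystem. A condensed sketch of that loop follows; the binary path and transport ID are the ones used in this run, and collecting them into the PERF and TRID variables is only a readability shorthand, not part of the original script.

# Condensed sketch of the host/perf.sh sweep traced above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
qd_depth=("1" "32" "128")
io_size=("512" "131072")
for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        # -w randrw -M 50 = 50/50 random read/write mix, -t 10 = 10 s per point
        "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
    done
done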
00:29:55.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.347 Initialization complete. Launching workers. 00:29:55.347 ======================================================== 00:29:55.347 Latency(us) 00:29:55.347 Device Information : IOPS MiB/s Average min max 00:29:55.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11929.10 5.82 10731.83 1731.56 24984.21 00:29:55.347 ======================================================== 00:29:55.347 Total : 11929.10 5.82 10731.83 1731.56 24984.21 00:29:55.347 00:29:55.347 20:35:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:55.347 20:35:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.347 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.353 Initializing NVMe Controllers 00:30:05.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.353 Controller IO queue size 128, less than required. 00:30:05.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:05.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.353 Initialization complete. Launching workers. 00:30:05.353 ======================================================== 00:30:05.353 Latency(us) 00:30:05.353 Device Information : IOPS MiB/s Average min max 00:30:05.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1216.10 152.01 105579.52 21365.24 222352.64 00:30:05.353 ======================================================== 00:30:05.353 Total : 1216.10 152.01 105579.52 21365.24 222352.64 00:30:05.353 00:30:05.353 20:35:42 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.353 20:35:43 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f485f4dd-1c24-41f2-b226-a3bdd1aaac48 00:30:05.611 20:35:43 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:05.869 20:35:44 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f163ee86-a3f3-4850-9505-36617611651a 00:30:06.127 20:35:44 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.385 rmmod nvme_tcp 00:30:06.385 rmmod nvme_fabrics 00:30:06.385 rmmod nvme_keyring 00:30:06.385 20:35:44 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4149273 ']' 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4149273 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 4149273 ']' 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 4149273 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4149273 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4149273' 00:30:06.385 killing process with pid 4149273 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 4149273 00:30:06.385 20:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 4149273 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.291 20:35:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.195 20:35:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:10.195 00:30:10.195 real 1m31.321s 00:30:10.195 user 5m38.305s 00:30:10.195 sys 0m15.034s 00:30:10.195 20:35:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:10.195 20:35:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:10.195 ************************************ 00:30:10.195 END TEST nvmf_perf 00:30:10.195 ************************************ 00:30:10.195 20:35:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:10.195 20:35:48 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:10.195 20:35:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:10.195 20:35:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:10.195 20:35:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.195 ************************************ 00:30:10.195 START TEST nvmf_fio_host 00:30:10.195 ************************************ 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:10.195 * Looking for test 
storage... 00:30:10.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:10.195 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:10.196 20:35:48 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:12.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.104 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:12.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:12.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:12.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
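The discovery pass just traced (nvmf/common.sh@382-401) resolves each supported PCI function to the kernel net device registered under it by globbing sysfs, then strips the path so only the interface name is kept. A condensed standalone sketch of that lookup is below; the PCI address is one of the E810 ports from this run, and enabling nullglob plus the explicit empty-match check are additions for the standalone version (the harness has its own handling for that case).

# Condensed sketch of the PCI-to-netdev lookup traced above.
shopt -s nullglob
pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
if (( ${#pci_net_devs[@]} == 0 )); then
    echo "No net device found under $pci" >&2
    exit 1
fi
pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, keep ifnames
echo "Found net devices under $pci: ${pci_net_devs[*]}"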
00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.105 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:12.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:30:12.364 00:30:12.364 --- 10.0.0.2 ping statistics --- 00:30:12.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.364 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:30:12.364 00:30:12.364 --- 10.0.0.1 ping statistics --- 00:30:12.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.364 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4161284 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4161284 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 4161284 ']' 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.364 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.364 [2024-07-15 20:35:50.725187] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:30:12.364 [2024-07-15 20:35:50.725299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.364 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.364 [2024-07-15 20:35:50.793727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.364 [2024-07-15 20:35:50.884560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:12.364 [2024-07-15 20:35:50.884623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.364 [2024-07-15 20:35:50.884648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.364 [2024-07-15 20:35:50.884662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.364 [2024-07-15 20:35:50.884673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.364 [2024-07-15 20:35:50.884755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.364 [2024-07-15 20:35:50.884809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.364 [2024-07-15 20:35:50.884926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.364 [2024-07-15 20:35:50.884930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.622 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.622 20:35:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:12.622 20:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:12.880 [2024-07-15 20:35:51.240207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.880 20:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:12.880 20:35:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.880 20:35:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.880 20:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:13.138 Malloc1 00:30:13.138 20:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:13.397 20:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:13.655 20:35:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.913 [2024-07-15 20:35:52.245937] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.913 20:35:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.170 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:14.171 20:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.428 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:14.428 fio-3.35 00:30:14.428 Starting 1 thread 00:30:14.428 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.952 00:30:16.952 test: (groupid=0, jobs=1): err= 0: pid=4161645: Mon Jul 15 20:35:55 2024 00:30:16.952 read: IOPS=9142, BW=35.7MiB/s (37.4MB/s)(71.7MiB/2007msec) 00:30:16.952 slat (nsec): min=1985, max=110959, avg=2571.53, stdev=1430.02 00:30:16.952 clat (usec): min=3189, max=12910, avg=7719.86, stdev=564.14 00:30:16.952 lat (usec): min=3212, max=12913, avg=7722.43, stdev=564.08 00:30:16.952 clat percentiles (usec): 00:30:16.952 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:30:16.952 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:30:16.952 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:30:16.952 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11994], 99.95th=[12256], 00:30:16.952 | 99.99th=[12911] 00:30:16.952 bw ( KiB/s): 
min=35192, max=37232, per=99.95%, avg=36552.00, stdev=920.87, samples=4 00:30:16.952 iops : min= 8798, max= 9308, avg=9138.00, stdev=230.22, samples=4 00:30:16.952 write: IOPS=9149, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2007msec); 0 zone resets 00:30:16.952 slat (usec): min=2, max=100, avg= 2.68, stdev= 1.13 00:30:16.952 clat (usec): min=1289, max=12243, avg=6176.23, stdev=504.10 00:30:16.952 lat (usec): min=1295, max=12246, avg=6178.91, stdev=504.05 00:30:16.952 clat percentiles (usec): 00:30:16.952 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5800], 00:30:16.952 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:30:16.952 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6915], 00:30:16.952 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[10814], 99.95th=[11469], 00:30:16.952 | 99.99th=[12256] 00:30:16.952 bw ( KiB/s): min=35928, max=36952, per=100.00%, avg=36620.00, stdev=478.42, samples=4 00:30:16.952 iops : min= 8982, max= 9238, avg=9155.00, stdev=119.60, samples=4 00:30:16.952 lat (msec) : 2=0.02%, 4=0.12%, 10=99.72%, 20=0.14% 00:30:16.952 cpu : usr=55.08%, sys=37.49%, ctx=64, majf=0, minf=32 00:30:16.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:16.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.953 issued rwts: total=18349,18364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.953 00:30:16.953 Run status group 0 (all jobs): 00:30:16.953 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.7MiB (75.2MB), run=2007-2007msec 00:30:16.953 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2007-2007msec 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:16.953 20:35:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:16.953 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:16.953 fio-3.35 00:30:16.953 Starting 1 thread 00:30:16.953 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.529 00:30:19.529 test: (groupid=0, jobs=1): err= 0: pid=4162004: Mon Jul 15 20:35:57 2024 00:30:19.529 read: IOPS=7907, BW=124MiB/s (130MB/s)(248MiB/2006msec) 00:30:19.529 slat (nsec): min=2840, max=93428, avg=3639.04, stdev=1510.95 00:30:19.529 clat (usec): min=2911, max=20824, avg=9697.56, stdev=2345.04 00:30:19.529 lat (usec): min=2915, max=20827, avg=9701.20, stdev=2345.08 00:30:19.529 clat percentiles (usec): 00:30:19.529 | 1.00th=[ 4883], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7701], 00:30:19.529 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 00:30:19.529 | 70.00th=[10683], 80.00th=[11731], 90.00th=[12911], 95.00th=[13960], 00:30:19.529 | 99.00th=[15401], 99.50th=[15926], 99.90th=[16712], 99.95th=[18482], 00:30:19.529 | 99.99th=[19792] 00:30:19.529 bw ( KiB/s): min=55168, max=69056, per=49.70%, avg=62888.00, stdev=6036.47, samples=4 00:30:19.529 iops : min= 3448, max= 4316, avg=3930.50, stdev=377.28, samples=4 00:30:19.529 write: IOPS=4481, BW=70.0MiB/s (73.4MB/s)(129MiB/1837msec); 0 zone resets 00:30:19.529 slat (usec): min=30, max=145, avg=33.16, stdev= 4.44 00:30:19.529 clat (usec): min=6592, max=21802, avg=11705.42, stdev=2168.59 00:30:19.529 lat (usec): min=6624, max=21833, avg=11738.58, stdev=2168.68 00:30:19.529 clat percentiles (usec): 00:30:19.529 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9765], 00:30:19.529 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:30:19.529 | 70.00th=[12911], 80.00th=[13698], 90.00th=[14484], 95.00th=[15533], 00:30:19.529 | 99.00th=[16581], 99.50th=[17171], 99.90th=[21365], 99.95th=[21627], 00:30:19.529 | 99.99th=[21890] 00:30:19.529 bw ( KiB/s): min=56288, max=72224, per=91.49%, avg=65608.00, stdev=6982.72, samples=4 00:30:19.529 iops : min= 3518, max= 4514, avg=4100.50, stdev=436.42, samples=4 00:30:19.529 lat (msec) : 4=0.13%, 10=46.87%, 20=52.93%, 50=0.06% 00:30:19.529 cpu : usr=72.77%, sys=22.99%, ctx=32, 
majf=0, minf=56 00:30:19.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:19.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.530 issued rwts: total=15863,8233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.530 00:30:19.530 Run status group 0 (all jobs): 00:30:19.530 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=248MiB (260MB), run=2006-2006msec 00:30:19.530 WRITE: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=129MiB (135MB), run=1837-1837msec 00:30:19.530 20:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:19.530 20:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:19.800 20:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:19.800 20:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:19.800 20:35:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:23.079 Nvme0n1 00:30:23.079 20:36:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:25.600 20:36:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=994f6a21-fe88-4d5a-b37e-233f389bc564 00:30:25.600 20:36:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 994f6a21-fe88-4d5a-b37e-233f389bc564 00:30:25.600 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=994f6a21-fe88-4d5a-b37e-233f389bc564 00:30:25.600 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:25.600 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:25.600 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:25.600 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:25.856 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:25.856 { 00:30:25.856 "uuid": "994f6a21-fe88-4d5a-b37e-233f389bc564", 00:30:25.856 "name": "lvs_0", 00:30:25.856 "base_bdev": "Nvme0n1", 00:30:25.856 "total_data_clusters": 930, 00:30:25.856 "free_clusters": 930, 00:30:25.856 
"block_size": 512, 00:30:25.856 "cluster_size": 1073741824 00:30:25.856 } 00:30:25.856 ]' 00:30:25.856 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="994f6a21-fe88-4d5a-b37e-233f389bc564") .free_clusters' 00:30:25.856 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:25.856 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="994f6a21-fe88-4d5a-b37e-233f389bc564") .cluster_size' 00:30:26.111 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:26.111 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:26.111 20:36:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:26.111 952320 00:30:26.111 20:36:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:26.366 be4967dd-6ff6-4468-82cb-df297795ca39 00:30:26.366 20:36:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:26.623 20:36:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:26.881 20:36:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:27.137 20:36:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:27.394 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:27.394 fio-3.35 00:30:27.394 Starting 1 thread 00:30:27.394 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.914 00:30:29.914 test: (groupid=0, jobs=1): err= 0: pid=4163487: Mon Jul 15 20:36:08 2024 00:30:29.914 read: IOPS=6053, BW=23.6MiB/s (24.8MB/s)(47.5MiB/2007msec) 00:30:29.914 slat (usec): min=2, max=177, avg= 3.01, stdev= 2.60 00:30:29.914 clat (usec): min=935, max=171640, avg=11642.02, stdev=11612.40 00:30:29.914 lat (usec): min=938, max=171684, avg=11645.03, stdev=11612.73 00:30:29.914 clat percentiles (msec): 00:30:29.914 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:29.914 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:29.914 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:29.914 | 99.00th=[ 13], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:30:29.914 | 99.99th=[ 171] 00:30:29.914 bw ( KiB/s): min=16880, max=26800, per=99.74%, avg=24152.00, stdev=4850.87, samples=4 00:30:29.914 iops : min= 4220, max= 6700, avg=6038.00, stdev=1212.72, samples=4 00:30:29.914 write: IOPS=6035, BW=23.6MiB/s (24.7MB/s)(47.3MiB/2007msec); 0 zone resets 00:30:29.914 slat (usec): min=2, max=107, avg= 3.10, stdev= 1.97 00:30:29.914 clat (usec): min=372, max=169711, avg=9323.65, stdev=10920.28 00:30:29.914 lat (usec): min=376, max=169717, avg=9326.75, stdev=10920.59 00:30:29.914 clat percentiles (msec): 00:30:29.914 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:29.914 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:29.914 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:29.914 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:29.914 | 99.99th=[ 169] 00:30:29.914 bw ( KiB/s): min=17856, max=26360, per=99.91%, avg=24122.00, stdev=4179.53, samples=4 00:30:29.914 iops : min= 4464, max= 6590, avg=6030.50, stdev=1044.88, samples=4 00:30:29.914 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:29.914 lat (msec) : 2=0.02%, 4=0.14%, 10=56.93%, 20=42.35%, 250=0.53% 00:30:29.914 cpu : usr=52.09%, sys=42.27%, ctx=86, majf=0, minf=32 00:30:29.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:29.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.914 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:29.914 issued rwts: total=12150,12114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:29.914 00:30:29.914 Run status group 0 (all jobs): 00:30:29.914 READ: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2007-2007msec 00:30:29.914 WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.6MB), run=2007-2007msec 00:30:29.914 20:36:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:30.172 20:36:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:31.100 20:36:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=4debb9d9-5f96-4fb1-83ae-9f90b379ce5f 00:30:31.100 20:36:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 4debb9d9-5f96-4fb1-83ae-9f90b379ce5f 00:30:31.100 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4debb9d9-5f96-4fb1-83ae-9f90b379ce5f 00:30:31.100 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:31.100 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:31.100 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:31.100 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:31.357 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:31.357 { 00:30:31.357 "uuid": "994f6a21-fe88-4d5a-b37e-233f389bc564", 00:30:31.357 "name": "lvs_0", 00:30:31.357 "base_bdev": "Nvme0n1", 00:30:31.357 "total_data_clusters": 930, 00:30:31.357 "free_clusters": 0, 00:30:31.357 "block_size": 512, 00:30:31.357 "cluster_size": 1073741824 00:30:31.357 }, 00:30:31.357 { 00:30:31.357 "uuid": "4debb9d9-5f96-4fb1-83ae-9f90b379ce5f", 00:30:31.357 "name": "lvs_n_0", 00:30:31.357 "base_bdev": "be4967dd-6ff6-4468-82cb-df297795ca39", 00:30:31.357 "total_data_clusters": 237847, 00:30:31.357 "free_clusters": 237847, 00:30:31.357 "block_size": 512, 00:30:31.357 "cluster_size": 4194304 00:30:31.357 } 00:30:31.357 ]' 00:30:31.357 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4debb9d9-5f96-4fb1-83ae-9f90b379ce5f") .free_clusters' 00:30:31.614 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:31.614 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4debb9d9-5f96-4fb1-83ae-9f90b379ce5f") .cluster_size' 00:30:31.614 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:31.614 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:31.614 20:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:31.614 951388 00:30:31.614 20:36:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:32.176 93a0827f-b0ee-42a5-9a85-3c691ceeb30e 00:30:32.176 20:36:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:32.432 20:36:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:32.689 20:36:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:32.946 20:36:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:33.202 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:33.202 fio-3.35 00:30:33.202 Starting 1 thread 00:30:33.202 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.726 00:30:35.726 test: (groupid=0, jobs=1): err= 0: pid=4164730: Mon Jul 15 20:36:13 2024 00:30:35.726 read: IOPS=5735, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec) 00:30:35.726 slat (usec): min=2, max=179, avg= 2.83, stdev= 2.56 00:30:35.726 clat (usec): min=4504, max=21553, avg=12316.35, stdev=1013.50 00:30:35.726 lat (usec): min=4509, max=21556, avg=12319.18, stdev=1013.35 00:30:35.726 clat percentiles (usec): 00:30:35.726 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:30:35.726 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:30:35.726 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[13829], 00:30:35.726 | 99.00th=[14615], 99.50th=[14877], 99.90th=[17695], 99.95th=[20055], 00:30:35.726 | 99.99th=[21365] 00:30:35.726 bw ( KiB/s): min=21584, max=23480, per=99.99%, avg=22940.00, stdev=907.13, samples=4 00:30:35.726 iops : min= 5396, max= 5870, avg=5735.00, stdev=226.78, samples=4 00:30:35.726 write: IOPS=5726, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec); 0 zone resets 00:30:35.726 slat (usec): min=2, max=164, avg= 2.94, stdev= 2.19 00:30:35.726 clat (usec): min=2180, max=19152, avg=9810.83, stdev=956.53 00:30:35.726 lat (usec): min=2188, max=19169, avg=9813.77, stdev=956.47 00:30:35.726 clat percentiles (usec): 00:30:35.726 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:35.726 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:30:35.726 | 70.00th=[10159], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:30:35.726 | 99.00th=[11863], 99.50th=[12649], 99.90th=[17957], 99.95th=[17957], 00:30:35.726 | 99.99th=[19006] 00:30:35.726 bw ( KiB/s): min=22616, max=23112, per=99.91%, avg=22886.00, stdev=227.81, samples=4 00:30:35.726 iops : min= 5654, max= 5778, avg=5721.50, stdev=56.95, samples=4 00:30:35.726 lat (msec) : 4=0.05%, 10=30.51%, 20=69.41%, 50=0.03% 00:30:35.726 cpu : usr=54.55%, sys=41.02%, ctx=66, majf=0, minf=32 00:30:35.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:35.726 issued rwts: total=11529,11511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:35.726 00:30:35.726 Run status group 0 (all jobs): 00:30:35.726 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2010-2010msec 00:30:35.726 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.1MB), run=2010-2010msec 00:30:35.726 20:36:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:35.726 20:36:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:35.726 20:36:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:39.934 20:36:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
00:30:39.934 20:36:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:43.213 20:36:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:43.213 20:36:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:45.111 rmmod nvme_tcp 00:30:45.111 rmmod nvme_fabrics 00:30:45.111 rmmod nvme_keyring 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4161284 ']' 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4161284 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 4161284 ']' 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 4161284 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161284 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161284' 00:30:45.111 killing process with pid 4161284 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 4161284 00:30:45.111 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 4161284 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:45.370 20:36:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.275 20:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:47.275 00:30:47.275 real 0m37.199s 00:30:47.275 user 2m21.651s 00:30:47.275 sys 0m7.549s 00:30:47.275 20:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.275 20:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.275 ************************************ 00:30:47.275 END TEST nvmf_fio_host 00:30:47.275 ************************************ 00:30:47.275 20:36:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:47.275 20:36:25 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:47.275 20:36:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:47.275 20:36:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.275 20:36:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.534 ************************************ 00:30:47.534 START TEST nvmf_failover 00:30:47.534 ************************************ 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:47.534 * Looking for test storage... 00:30:47.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:47.534 20:36:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:49.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:49.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.433 
20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:49.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:49.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:49.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:30:49.433 00:30:49.433 --- 10.0.0.2 ping statistics --- 00:30:49.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.433 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:49.433 00:30:49.433 --- 10.0.0.1 ping statistics --- 00:30:49.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.433 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:49.433 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4167979 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4167979 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4167979 ']' 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:49.691 20:36:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.691 [2024-07-15 20:36:28.015513] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:30:49.691 [2024-07-15 20:36:28.015589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.691 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.691 [2024-07-15 20:36:28.088126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:49.691 [2024-07-15 20:36:28.178476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.691 [2024-07-15 20:36:28.178525] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.691 [2024-07-15 20:36:28.178552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.691 [2024-07-15 20:36:28.178565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.691 [2024-07-15 20:36:28.178577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.691 [2024-07-15 20:36:28.178671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.691 [2024-07-15 20:36:28.178789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.691 [2024-07-15 20:36:28.178793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.949 20:36:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:49.949 20:36:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:49.949 20:36:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.949 20:36:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:49.949 20:36:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.949 20:36:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.949 20:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:50.205 [2024-07-15 20:36:28.550411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.205 20:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:50.462 Malloc0 00:30:50.462 20:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.748 20:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.005 20:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.262 [2024-07-15 20:36:29.648108] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.262 20:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:51.528 [2024-07-15 20:36:29.888745] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:51.528 20:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:51.790 [2024-07-15 20:36:30.145801] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4168264 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4168264 /var/tmp/bdevperf.sock 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4168264 ']' 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:51.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
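The RPC calls above are the whole target-side preparation for the failover run: a TCP transport, a 64 MB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and listeners on 10.0.0.2 ports 4420, 4421 and 4422, after which bdevperf is launched against /var/tmp/bdevperf.sock. A condensed sketch of that setup sequence, assuming a running nvmf_tgt and the in-tree scripts/rpc.py (the RPC variable and the listener loop are shorthand introduced here, not part of the original script):

RPC=./scripts/rpc.py                                    # shorthand; the log uses the full Jenkins workspace path

$RPC nvmf_create_transport -t tcp -o -u 8192            # same transport options the test passes
$RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Three listeners on the same address give the initiator alternate paths to fail over between.
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done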
00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.790 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:52.047 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.047 20:36:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:52.047 20:36:30 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.304 NVMe0n1 00:30:52.562 20:36:30 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.819 00:30:52.819 20:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4168397 00:30:52.819 20:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:52.819 20:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:53.753 20:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.012 [2024-07-15 20:36:32.528307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528442] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528492] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528527] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528609] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528693] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.012 [2024-07-15 20:36:32.528715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67270 is same with the state(5) to be set 00:30:54.299 20:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:57.581 20:36:35 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.581 00:30:57.581 20:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:58.147 [2024-07-15 20:36:36.378005] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.147 [2024-07-15 20:36:36.378060] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.147 [2024-07-15 20:36:36.378076] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.147 [2024-07-15 20:36:36.378089] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.147 [2024-07-15 20:36:36.378101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378287] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378353] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the 
state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378365] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378423] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378448] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378460] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378503] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378820] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378949] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378962] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.378987] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 
20:36:36.378999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 [2024-07-15 20:36:36.379169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68090 is same with the state(5) to be set 00:30:58.148 20:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:01.423 20:36:39 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.423 [2024-07-15 20:36:39.645079] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.423 20:36:39 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:02.357 20:36:40 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:02.614 20:36:40 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4168397 00:31:09.174 0 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4168264 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4168264 ']' 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4168264 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4168264 
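On the initiator side, the sequence logged above boils down to: attach NVMe0 over two ports so bdevperf has an alternate path, start the 15-second verify job over the bdevperf RPC socket, then remove and re-add listeners on the target while the job runs, which is what triggers the bursts of tqpair state-change messages above. A condensed sketch under the same assumptions (paths shortened; BPERF_RPC, NQN and test_pid are shorthand introduced here):

RPC=./scripts/rpc.py
BPERF_RPC="$RPC -s /var/tmp/bdevperf.sock"              # bdevperf was started with -z -r /var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Primary path on 4420 plus an alternate on 4421, both under the same controller name.
$BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN

# Kick off the timed verify workload in the background and remember its pid.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
test_pid=$!
sleep 1

# Pull paths out from under the running job, re-adding one so I/O always has somewhere to go.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 3
$BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422

wait $test_pid                                          # returns once the 15-second run finishes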
00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4168264' 00:31:09.174 killing process with pid 4168264 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4168264 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4168264 00:31:09.174 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:09.174 [2024-07-15 20:36:30.209650] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:31:09.174 [2024-07-15 20:36:30.209735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168264 ] 00:31:09.174 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.174 [2024-07-15 20:36:30.272130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.174 [2024-07-15 20:36:30.359356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.174 Running I/O for 15 seconds... 00:31:09.174 [2024-07-15 20:36:32.529726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.529767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.529794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.529810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.529826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.529842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.529856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.529905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.529924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.529938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.529954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.529967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.529982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.529995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 
20:36:32.530288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.174 [2024-07-15 20:36:32.530786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.174 [2024-07-15 20:36:32.530798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.530812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.175 [2024-07-15 20:36:32.530825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.530843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.530867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.530902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.530917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.530932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.530946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.530961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.530973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.530988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80104 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 
20:36:32.531463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.531983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.531998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.532011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.532026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.175 [2024-07-15 20:36:32.532039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.175 [2024-07-15 20:36:32.532054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.176 [2024-07-15 20:36:32.532067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.176 [2024-07-15 20:36:32.532095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.176 [2024-07-15 20:36:32.532122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.176 [2024-07-15 20:36:32.532150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.176 [2024-07-15 20:36:32.532185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.176 [2024-07-15 20:36:32.532227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.176 [2024-07-15 20:36:32.532262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80408 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80416 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:09.176 [2024-07-15 20:36:32.532385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80424 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80432 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80440 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80448 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80456 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80464 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532658] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80472 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80480 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80488 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80496 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79768 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.532954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.532967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.532977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.532988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.533000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.533013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.533023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.533034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.533046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.533059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.533070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.533081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.533093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.533106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.533119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.533131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.533143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.533166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.533177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.533202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.533214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.533227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.533237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.533247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.533274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.533287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.533298] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.176 [2024-07-15 20:36:32.533309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:31:09.176 [2024-07-15 20:36:32.533321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.176 [2024-07-15 20:36:32.533334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.176 [2024-07-15 20:36:32.533345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 
20:36:32.533883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.533954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.533967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.533980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.533991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.534002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.534015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.534028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.534039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.534050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.534062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.534075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.534092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.534103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.534116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.534129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.534139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.534150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.534172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.534200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.534212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.534230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.534242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.534254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.534265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.534276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.534287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.534300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.177 [2024-07-15 20:36:32.534310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.177 [2024-07-15 20:36:32.534324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:31:09.177 [2024-07-15 20:36:32.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.177 [2024-07-15 20:36:32.534350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.178 [2024-07-15 20:36:32.534360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.178 [2024-07-15 20:36:32.534371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:31:09.178 [2024-07-15 20:36:32.534383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:32.534395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.178 [2024-07-15 20:36:32.534406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.178 [2024-07-15 20:36:32.534417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:31:09.178 [2024-07-15 20:36:32.534428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:32.534441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.178 [2024-07-15 20:36:32.534452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.178 [2024-07-15 20:36:32.534463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:31:09.178 [2024-07-15 20:36:32.534475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:32.534531] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd8760 was disconnected and freed. reset controller. 
00:31:09.178 [2024-07-15 20:36:32.534553] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:09.178 [2024-07-15 20:36:32.534601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.178 [2024-07-15 20:36:32.534619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:32.534635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.178 [2024-07-15 20:36:32.534648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:32.534662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.178 [2024-07-15 20:36:32.534675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:32.534689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.178 [2024-07-15 20:36:32.534702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:32.534715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.178 [2024-07-15 20:36:32.538057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.178 [2024-07-15 20:36:32.538095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4830 (9): Bad file descriptor 00:31:09.178 [2024-07-15 20:36:32.569476] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:09.178 [2024-07-15 20:36:36.380516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.380976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.380991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.381020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.381049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.381077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.178 [2024-07-15 20:36:36.381107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96632 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.381984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.381998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.382014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.382027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.382042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.382055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.178 [2024-07-15 20:36:36.382071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.178 [2024-07-15 20:36:36.382085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 
20:36:36.382114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.179 [2024-07-15 20:36:36.382687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.179 [2024-07-15 20:36:36.382701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.179 [2024-07-15 20:36:36.382716 - 20:36:36.383957] nvme_qpair.c: repeated *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): WRITE sqid:1 nsid:1, lba 96880 through 97200, len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.179 [2024-07-15 20:36:36.383988 - 20:36:36.384778] nvme_qpair.c: repeated sequences of 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually, and 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1, lba 97208 through 97328, len:8 PRP1 0x0 PRP2 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.179 [2024-07-15 20:36:36.384831] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ccad10 was disconnected and freed. reset controller. 
00:31:09.179 [2024-07-15 20:36:36.384849] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:31:09.179 [2024-07-15 20:36:36.384913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:09.179 [2024-07-15 20:36:36.384940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.179 [2024-07-15 20:36:36.384956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:09.179 [2024-07-15 20:36:36.384970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.179 [2024-07-15 20:36:36.384984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:09.179 [2024-07-15 20:36:36.384997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.179 [2024-07-15 20:36:36.385011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:09.179 [2024-07-15 20:36:36.385024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.179 [2024-07-15 20:36:36.385037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:09.179 [2024-07-15 20:36:36.385076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4830 (9): Bad file descriptor 
00:31:09.179 [2024-07-15 20:36:36.388355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:31:09.179 [2024-07-15 20:36:36.548749] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:09.179 [2024-07-15 20:36:40.896106 - 20:36:40.896340] nvme_qpair.c: repeated *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 nsid:1, lba 62032 through 62072, len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.180 [2024-07-15 20:36:40.896356 - 20:36:40.898745] nvme_qpair.c: repeated *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): WRITE sqid:1 nsid:1, lba 62136 through 62768, len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.181 [2024-07-15 20:36:40.898775 - 20:36:40.900131] nvme_qpair.c: repeated sequences of 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually, and 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1, lba 62776 through 62984, len:8 PRP1 0x0 PRP2 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:09.181 [2024-07-15 20:36:40.900143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:62992 len:8 PRP1 0x0 PRP2 0x0 00:31:09.181 [2024-07-15 20:36:40.900156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.181 [2024-07-15 20:36:40.900170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.181 [2024-07-15 20:36:40.900181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.181 [2024-07-15 20:36:40.900207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63000 len:8 PRP1 0x0 PRP2 0x0 00:31:09.181 [2024-07-15 20:36:40.900219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.181 [2024-07-15 20:36:40.900232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.181 [2024-07-15 20:36:40.900243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.181 [2024-07-15 20:36:40.900258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63008 len:8 PRP1 0x0 PRP2 0x0 00:31:09.181 [2024-07-15 20:36:40.900271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.181 [2024-07-15 20:36:40.900284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.181 [2024-07-15 20:36:40.900295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.181 [2024-07-15 20:36:40.900306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63016 len:8 PRP1 0x0 PRP2 0x0 00:31:09.181 [2024-07-15 20:36:40.900318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.181 [2024-07-15 20:36:40.900331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.181 [2024-07-15 20:36:40.900342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.181 [2024-07-15 20:36:40.900353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63024 len:8 PRP1 0x0 PRP2 0x0 00:31:09.181 [2024-07-15 20:36:40.900365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.181 [2024-07-15 20:36:40.900378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.181 [2024-07-15 20:36:40.900388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.181 [2024-07-15 20:36:40.900399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63032 len:8 PRP1 0x0 PRP2 0x0 00:31:09.181 [2024-07-15 20:36:40.900411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.181 [2024-07-15 20:36:40.900424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.181 [2024-07-15 20:36:40.900435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.181 [2024-07-15 20:36:40.900446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63040 len:8 PRP1 0x0 PRP2 0x0 
00:31:09.182 [2024-07-15 20:36:40.900457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63048 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62080 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62088 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62096 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62104 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62112 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62120 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:09.182 [2024-07-15 20:36:40.900814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:09.182 [2024-07-15 20:36:40.900825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62128 len:8 PRP1 0x0 PRP2 0x0 00:31:09.182 [2024-07-15 20:36:40.900837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.900913] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d780f0 was disconnected and freed. reset controller. 00:31:09.182 [2024-07-15 20:36:40.900933] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:09.182 [2024-07-15 20:36:40.900967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.182 [2024-07-15 20:36:40.900985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.901001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.182 [2024-07-15 20:36:40.901014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.901028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.182 [2024-07-15 20:36:40.901041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.901055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.182 [2024-07-15 20:36:40.901069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.182 [2024-07-15 20:36:40.901086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.182 [2024-07-15 20:36:40.901138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4830 (9): Bad file descriptor 00:31:09.182 [2024-07-15 20:36:40.904395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.182 [2024-07-15 20:36:41.069768] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:09.182
00:31:09.182 Latency(us)
00:31:09.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:09.182 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.182 Verification LBA range: start 0x0 length 0x4000
00:31:09.182 NVMe0n1 : 15.01 8593.44 33.57 931.09 0.00 13409.05 825.27 17185.00
00:31:09.182 ===================================================================================================================
00:31:09.182 Total : 8593.44 33.57 931.09 0.00 13409.05 825.27 17185.00
00:31:09.182 Received shutdown signal, test time was about 15.000000 seconds
00:31:09.182
00:31:09.182 Latency(us)
00:31:09.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:09.182 ===================================================================================================================
00:31:09.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4170241
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4170241 /var/tmp/bdevperf.sock
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4170241 ']'
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:09.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:09.182 20:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:09.182 [2024-07-15 20:36:47.155530] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:09.182 20:36:47 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:09.182 [2024-07-15 20:36:47.400218] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:09.182 20:36:47 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:09.439 NVMe0n1 00:31:09.439 20:36:47 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:09.696 00:31:09.696 20:36:48 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:10.260 00:31:10.260 20:36:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:10.260 20:36:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:10.260 20:36:48 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:10.517 20:36:49 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:13.796 20:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:13.796 20:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:13.796 20:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4170901 00:31:13.796 20:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:13.796 20:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4170901 00:31:15.169 0 00:31:15.169 20:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:15.169 [2024-07-15 20:36:46.694042] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
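Condensed for reference, the failover path setup traced above amounts to roughly the following sequence (a minimal sketch assembled from the rpc.py calls visible in this run; the 10.0.0.2 portal addresses, TCP ports 4420-4422, the NVMe0 controller name and the /var/tmp/bdevperf.sock RPC socket are specific to this test bed, paths are shortened relative to the SPDK tree, and the harness backgrounds bdevperf via waitforlisten rather than a bare '&'):

  # bdevperf is started earlier with -z (wait for an explicit start) on its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # expose two additional target listeners so the host has portals to fail over to
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the same subsystem through all three portals under one controller name
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # confirm the controller exists, then drop the primary portal so queued I/O is
  # aborted (SQ DELETION) and bdev_nvme fails over to the next transport ID
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3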
00:31:15.169 [2024-07-15 20:36:46.694141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170241 ] 00:31:15.169 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.169 [2024-07-15 20:36:46.753514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.169 [2024-07-15 20:36:46.836533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.169 [2024-07-15 20:36:48.996088] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:15.169 [2024-07-15 20:36:48.996212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.169 [2024-07-15 20:36:48.996236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.169 [2024-07-15 20:36:48.996272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.169 [2024-07-15 20:36:48.996287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.169 [2024-07-15 20:36:48.996301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.169 [2024-07-15 20:36:48.996314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.169 [2024-07-15 20:36:48.996328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.169 [2024-07-15 20:36:48.996342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.169 [2024-07-15 20:36:48.996364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.169 [2024-07-15 20:36:48.996422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.169 [2024-07-15 20:36:48.996461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c0830 (9): Bad file descriptor 00:31:15.169 [2024-07-15 20:36:49.088100] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:15.169 Running I/O for 1 seconds... 
00:31:15.169 00:31:15.169 Latency(us) 00:31:15.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.169 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:15.169 Verification LBA range: start 0x0 length 0x4000 00:31:15.169 NVMe0n1 : 1.01 8710.44 34.03 0.00 0.00 14623.74 1092.27 14660.65 00:31:15.169 =================================================================================================================== 00:31:15.169 Total : 8710.44 34.03 0.00 0.00 14623.74 1092.27 14660.65 00:31:15.169 20:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:15.169 20:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:15.169 20:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:15.473 20:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:15.473 20:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:15.731 20:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:15.988 20:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:19.263 20:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4170241 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4170241 ']' 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4170241 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4170241 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4170241' 00:31:19.264 killing process with pid 4170241 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4170241 00:31:19.264 20:36:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4170241 00:31:19.521 20:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:19.521 20:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:19.779 
20:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.779 rmmod nvme_tcp 00:31:19.779 rmmod nvme_fabrics 00:31:19.779 rmmod nvme_keyring 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4167979 ']' 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4167979 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4167979 ']' 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4167979 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4167979 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4167979' 00:31:19.779 killing process with pid 4167979 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4167979 00:31:19.779 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4167979 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.037 20:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.572 20:37:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.572 00:31:22.572 real 0m34.750s 00:31:22.572 user 2m2.406s 00:31:22.572 sys 0m5.803s 00:31:22.572 20:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:22.572 20:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
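The corresponding teardown reduces to roughly the following (a sketch of the cleanup steps shown above; 4170241 and 4167979 stand in for the bdevperf and nvmf_tgt PIDs captured earlier in this run, and paths are abbreviated relative to the SPDK tree):

  # walk the controller through the remaining portals, then stop the host-side app
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  kill 4170241 && wait 4170241
  # remove the subsystem from the target, unload the host kernel modules,
  # stop the target and flush the initiator-side address
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 4167979 && wait 4167979
  ip -4 addr flush cvl_0_1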
00:31:22.572 ************************************ 00:31:22.572 END TEST nvmf_failover 00:31:22.572 ************************************ 00:31:22.572 20:37:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:22.572 20:37:00 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:22.572 20:37:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:22.572 20:37:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.572 20:37:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.572 ************************************ 00:31:22.572 START TEST nvmf_host_discovery 00:31:22.572 ************************************ 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:22.572 * Looking for test storage... 00:31:22.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.572 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:22.573 20:37:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.573 20:37:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.473 20:37:02 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:24.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.473 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:24.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:24.474 20:37:02 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:24.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:24.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.474 20:37:02 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:31:24.474 00:31:24.474 --- 10.0.0.2 ping statistics --- 00:31:24.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.474 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:31:24.474 00:31:24.474 --- 10.0.0.1 ping statistics --- 00:31:24.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.474 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4173507 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 4173507 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4173507 ']' 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.474 20:37:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.474 [2024-07-15 20:37:02.787271] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:31:24.474 [2024-07-15 20:37:02.787341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.474 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.474 [2024-07-15 20:37:02.850256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.474 [2024-07-15 20:37:02.934111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.474 [2024-07-15 20:37:02.934182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.474 [2024-07-15 20:37:02.934206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.474 [2024-07-15 20:37:02.934217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.474 [2024-07-15 20:37:02.934227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
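Stripped of the helper indirection, the dual-port plumbing that nvmf/common.sh performs above is roughly this (a sketch of the commands visible in this trace; cvl_0_0 and cvl_0_1 are the two E810 ports detected on this machine, and the nvmf_tgt path is shortened relative to the SPDK tree):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in and sanity-check connectivity in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # run the target inside the namespace on core 1 (-m 0x2) with full tracing enabled
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2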
00:31:24.474 [2024-07-15 20:37:02.934267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.733 [2024-07-15 20:37:03.075655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.733 [2024-07-15 20:37:03.083844] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.733 null0 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.733 null1 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4173532 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4173532 /tmp/host.sock 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4173532 ']' 00:31:24.733 20:37:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:24.733 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.733 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.733 [2024-07-15 20:37:03.159550] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:31:24.733 [2024-07-15 20:37:03.159639] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173532 ] 00:31:24.733 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.733 [2024-07-15 20:37:03.217610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.991 [2024-07-15 20:37:03.306267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.991 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:25.249 20:37:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 [2024-07-15 20:37:03.697523] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.249 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:25.507 20:37:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:26.072 [2024-07-15 20:37:04.484972] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:26.072 [2024-07-15 20:37:04.484999] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:26.072 [2024-07-15 20:37:04.485035] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.330 [2024-07-15 20:37:04.612489] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:26.330 [2024-07-15 20:37:04.715101] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:26.330 [2024-07-15 20:37:04.715125] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.588 20:37:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:26.588 20:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.588 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:26.589 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.846 [2024-07-15 20:37:05.346776] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:26.846 [2024-07-15 20:37:05.347958] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:26.846 [2024-07-15 20:37:05.347996] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:26.846 20:37:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:27.104 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.105 [2024-07-15 20:37:05.475805] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:27.105 20:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:27.105 [2024-07-15 20:37:05.578562] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:27.105 [2024-07-15 20:37:05.578585] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:27.105 [2024-07-15 20:37:05.578594] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.036 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.294 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:28.294 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:28.294 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:28.294 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.294 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:28.294 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.294 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.294 [2024-07-15 20:37:06.582947] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:28.294 [2024-07-15 20:37:06.582987] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:28.294 [2024-07-15 20:37:06.585024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.294 [2024-07-15 20:37:06.585057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.294 [2024-07-15 20:37:06.585075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.294 [2024-07-15 20:37:06.585089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.294 [2024-07-15 20:37:06.585104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.294 [2024-07-15 20:37:06.585117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.295 [2024-07-15 20:37:06.585142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.295 [2024-07-15 20:37:06.585155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.295 [2024-07-15 20:37:06.585169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.295 20:37:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:28.295 [2024-07-15 20:37:06.595019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.295 [2024-07-15 20:37:06.605058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.295 [2024-07-15 20:37:06.605357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.295 [2024-07-15 20:37:06.605392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13530 with addr=10.0.0.2, port=4420 00:31:28.295 [2024-07-15 20:37:06.605410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 [2024-07-15 20:37:06.605437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 [2024-07-15 20:37:06.605461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.295 [2024-07-15 20:37:06.605477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.295 [2024-07-15 20:37:06.605494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.295 [2024-07-15 20:37:06.605517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
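For reference, the sequence of RPCs this test has driven so far condenses to the sketch below. It is reconstructed from the rpc_cmd calls in the trace above; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so the equivalent stand-alone invocations are shown, with the addresses, ports and NQNs used in this run.

# Target side (default RPC socket): transport, discovery listener, backing null
# bdevs, then a subsystem for the discovery service to advertise.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512
rpc.py bdev_null_create null1 1000 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side (second nvmf_tgt started with -r /tmp/host.sock): enable bdev_nvme
# logging and attach to the discovery service. The reconnect errors above are the
# expected fallout of removing the 4420 listener while the host still holds a path to it.
rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test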
00:31:28.295 [2024-07-15 20:37:06.615138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.295 [2024-07-15 20:37:06.615408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.295 [2024-07-15 20:37:06.615436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13530 with addr=10.0.0.2, port=4420 00:31:28.295 [2024-07-15 20:37:06.615451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 [2024-07-15 20:37:06.615473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 [2024-07-15 20:37:06.615507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.295 [2024-07-15 20:37:06.615525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.295 [2024-07-15 20:37:06.615538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.295 [2024-07-15 20:37:06.615572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.295 [2024-07-15 20:37:06.625237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.295 [2024-07-15 20:37:06.625494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.295 [2024-07-15 20:37:06.625525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13530 with addr=10.0.0.2, port=4420 00:31:28.295 [2024-07-15 20:37:06.625543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 [2024-07-15 20:37:06.625568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 [2024-07-15 20:37:06.625592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.295 [2024-07-15 20:37:06.625608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.295 [2024-07-15 20:37:06.625623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.295 [2024-07-15 20:37:06.625644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
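Each "[[ ... == ... ]]" assertion in the trace is driven by the harness's waitforcondition helper, whose steps the xtrace exposes one at a time (local cond, local max=10, (( max-- )), eval, sleep 1, return 0). A rough reconstruction follows, offered only as a reading aid rather than the literal source in common/autotest_common.sh.

# Poll an arbitrary shell condition up to 10 times, one second apart.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition met: stop waiting
        sleep 1
    done
    return 1                       # let the caller fail the test on timeout
}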
00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.295 [2024-07-15 20:37:06.635323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.295 [2024-07-15 20:37:06.635597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.295 [2024-07-15 20:37:06.635627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13530 with addr=10.0.0.2, port=4420 00:31:28.295 [2024-07-15 20:37:06.635644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 [2024-07-15 20:37:06.635667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 [2024-07-15 20:37:06.635704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.295 [2024-07-15 20:37:06.635723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.295 [2024-07-15 20:37:06.635738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.295 [2024-07-15 20:37:06.635757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
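The get_subsystem_names, get_bdev_list and get_subsystem_paths helpers that these conditions evaluate each boil down to one RPC plus a jq filter, exactly as the trace shows; spelled out against the host socket used here:

rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs                                # get_subsystem_names
rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs                                           # get_bdev_list
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # get_subsystem_paths nvme0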
00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:28.295 [2024-07-15 20:37:06.645400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.295 [2024-07-15 20:37:06.645616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.295 [2024-07-15 20:37:06.645649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13530 with addr=10.0.0.2, port=4420 00:31:28.295 [2024-07-15 20:37:06.645667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 [2024-07-15 20:37:06.645693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 [2024-07-15 20:37:06.645730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.295 [2024-07-15 20:37:06.645750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.295 [2024-07-15 20:37:06.645766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.295 [2024-07-15 20:37:06.645789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.295 [2024-07-15 20:37:06.655481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.295 [2024-07-15 20:37:06.655724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.295 [2024-07-15 20:37:06.655755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13530 with addr=10.0.0.2, port=4420 00:31:28.295 [2024-07-15 20:37:06.655780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 [2024-07-15 20:37:06.655805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 [2024-07-15 20:37:06.655857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.295 [2024-07-15 20:37:06.655888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.295 [2024-07-15 20:37:06.655921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.295 [2024-07-15 20:37:06.655942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
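Once bdev_nvme drops the stale 4420 path, the trace goes on to verify that only port 4421 is still reported for nvme0, to re-read the notification count, and finally to stop discovery and confirm the controller and its bdevs disappear. The RPCs behind those checks, against the same host socket:

rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # expect 4421 only
rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'                          # notifications seen since id 2
rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme                                         # detach nvme0; bdev list goes empty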
00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.295 [2024-07-15 20:37:06.665558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.295 [2024-07-15 20:37:06.665769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.295 [2024-07-15 20:37:06.665800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13530 with addr=10.0.0.2, port=4420 00:31:28.295 [2024-07-15 20:37:06.665817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13530 is same with the state(5) to be set 00:31:28.295 [2024-07-15 20:37:06.665841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13530 (9): Bad file descriptor 00:31:28.295 [2024-07-15 20:37:06.665864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.295 [2024-07-15 20:37:06.665899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.295 [2024-07-15 20:37:06.665930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.295 [2024-07-15 20:37:06.665964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.295 [2024-07-15 20:37:06.670940] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:28.295 [2024-07-15 20:37:06.670969] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.295 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:28.296 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.553 20:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.483 [2024-07-15 20:37:07.954077] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:29.483 [2024-07-15 20:37:07.954099] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:29.483 [2024-07-15 20:37:07.954119] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:29.741 [2024-07-15 20:37:08.041411] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:29.741 [2024-07-15 20:37:08.107497] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:29.741 [2024-07-15 20:37:08.107543] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:29.741 request: 00:31:29.741 { 00:31:29.741 "name": "nvme", 00:31:29.741 "trtype": "tcp", 00:31:29.741 "traddr": "10.0.0.2", 00:31:29.741 "adrfam": "ipv4", 00:31:29.741 "trsvcid": "8009", 00:31:29.741 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:29.741 "wait_for_attach": true, 00:31:29.741 "method": "bdev_nvme_start_discovery", 00:31:29.741 "req_id": 1 00:31:29.741 } 00:31:29.741 Got JSON-RPC error response 00:31:29.741 response: 00:31:29.741 { 00:31:29.741 "code": -17, 00:31:29.741 "message": "File exists" 00:31:29.741 } 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.741 request: 00:31:29.741 { 00:31:29.741 "name": "nvme_second", 00:31:29.741 "trtype": "tcp", 00:31:29.741 "traddr": "10.0.0.2", 00:31:29.741 "adrfam": "ipv4", 00:31:29.741 "trsvcid": "8009", 00:31:29.741 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:29.741 "wait_for_attach": true, 00:31:29.741 "method": "bdev_nvme_start_discovery", 00:31:29.741 "req_id": 1 00:31:29.741 } 00:31:29.741 Got JSON-RPC error response 00:31:29.741 response: 00:31:29.741 { 00:31:29.741 "code": -17, 00:31:29.741 "message": "File exists" 00:31:29.741 } 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.741 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.999 20:37:08 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.999 20:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.930 [2024-07-15 20:37:09.311024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.930 [2024-07-15 20:37:09.311095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4eec0 with addr=10.0.0.2, port=8010 00:31:30.930 [2024-07-15 20:37:09.311125] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:30.930 [2024-07-15 20:37:09.311140] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:30.930 [2024-07-15 20:37:09.311153] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:31.897 [2024-07-15 20:37:10.313506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.897 [2024-07-15 20:37:10.313572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4eec0 with addr=10.0.0.2, port=8010 00:31:31.897 [2024-07-15 20:37:10.313606] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:31.897 [2024-07-15 20:37:10.313623] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:31.897 [2024-07-15 20:37:10.313638] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:32.831 [2024-07-15 20:37:11.315616] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:32.831 request: 00:31:32.831 { 00:31:32.831 "name": "nvme_second", 00:31:32.831 "trtype": "tcp", 00:31:32.831 "traddr": "10.0.0.2", 00:31:32.831 "adrfam": "ipv4", 00:31:32.831 "trsvcid": "8010", 00:31:32.831 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:32.831 "wait_for_attach": false, 00:31:32.831 "attach_timeout_ms": 3000, 00:31:32.831 "method": "bdev_nvme_start_discovery", 00:31:32.831 "req_id": 1 00:31:32.831 } 00:31:32.831 Got JSON-RPC error response 00:31:32.831 response: 00:31:32.831 { 00:31:32.831 "code": -110, 
00:31:32.831 "message": "Connection timed out" 00:31:32.831 } 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:32.831 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4173532 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:33.089 rmmod nvme_tcp 00:31:33.089 rmmod nvme_fabrics 00:31:33.089 rmmod nvme_keyring 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4173507 ']' 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4173507 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 4173507 ']' 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 4173507 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173507 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173507' 00:31:33.089 killing process with pid 4173507 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 4173507 00:31:33.089 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 4173507 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:33.347 20:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.249 20:37:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:35.249 00:31:35.249 real 0m13.089s 00:31:35.249 user 0m18.996s 00:31:35.249 sys 0m2.784s 00:31:35.249 20:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:35.249 20:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.249 ************************************ 00:31:35.249 END TEST nvmf_host_discovery 00:31:35.249 ************************************ 00:31:35.249 20:37:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:35.249 20:37:13 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:35.249 20:37:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:35.249 20:37:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:35.249 20:37:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:35.249 ************************************ 00:31:35.249 START TEST nvmf_host_multipath_status 00:31:35.249 ************************************ 00:31:35.249 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:35.506 * Looking for test storage... 
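Before the next test starts, the trap is cleared and nvmftestfini tears the fixture down: the host application is killed by pid, the kernel NVMe/TCP modules are unloaded, the target process is stopped and the initiator address is flushed. A rough manual equivalent using the names from this run; the ip netns delete line is an assumption about what _remove_spdk_ns does, it is not shown verbatim above:

    host_pid=4173532   # discovery host app started for this test
    tgt_pid=4173507    # nvmf_tgt reactor process behind the test
    kill "$host_pid"
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill "$tgt_pid"
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true   # assumed namespace cleanup
    ip -4 addr flush cvl_0_1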
00:31:35.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.506 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:35.507 20:37:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:35.507 20:37:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:37.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:37.408 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.408 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
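At this point nvmftestinit is selecting NICs purely by PCI vendor/device ID: it builds candidate lists for Intel E810 (0x1592/0x159b), X722 (0x37d2) and several Mellanox parts, and on this host matches two 0x8086:0x159b functions at 0000:0a:00.0/.1. A minimal sketch of the same sysfs walk, assuming the interfaces were already renamed to the cvl_* scheme by earlier tooling:

    # Find net devices backed by Intel E810 (0x159b) ports, as the trace does.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done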
00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:37.409 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:37.409 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:37.409 20:37:15 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:37.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:31:37.409 00:31:37.409 --- 10.0.0.2 ping statistics --- 00:31:37.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.409 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:37.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:31:37.409 00:31:37.409 --- 10.0.0.1 ping statistics --- 00:31:37.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.409 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4176565 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4176565 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4176565 ']' 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:37.409 20:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.409 [2024-07-15 20:37:15.844760] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
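The nvmf_tcp_init steps traced above turn the two E810 ports into a point-to-point lab: cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. A condensed sketch of those steps with the same names and addresses (the nvmf_tgt path is relative to the SPDK checkout used in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Target started inside the namespace, as in the trace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &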
00:31:37.409 [2024-07-15 20:37:15.844858] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.409 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.409 [2024-07-15 20:37:15.916349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:37.667 [2024-07-15 20:37:16.010198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.667 [2024-07-15 20:37:16.010270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.667 [2024-07-15 20:37:16.010293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.667 [2024-07-15 20:37:16.010307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.667 [2024-07-15 20:37:16.010318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.667 [2024-07-15 20:37:16.010499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.667 [2024-07-15 20:37:16.010505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4176565 00:31:37.667 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:37.924 [2024-07-15 20:37:16.387837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.924 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:38.182 Malloc0 00:31:38.182 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:38.439 20:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:38.696 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.954 [2024-07-15 20:37:17.411011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.954 20:37:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:39.212 [2024-07-15 20:37:17.651595] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4176843 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4176843 /var/tmp/bdevperf.sock 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4176843 ']' 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:39.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.212 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:39.469 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:39.469 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:39.469 20:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:39.726 20:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:40.291 Nvme0n1 00:31:40.291 20:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:40.549 Nvme0n1 00:31:40.549 20:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:40.549 20:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:43.077 20:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:43.077 20:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:43.077 20:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:43.077 20:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.449 20:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:44.706 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.706 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:44.706 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.706 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.973 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.973 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.973 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.973 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:45.231 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.231 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:45.231 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.231 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:45.488 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.488 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:45.488 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.488 20:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:45.745 20:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.745 20:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:45.745 20:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:46.003 20:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:46.265 20:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:47.254 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:47.254 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:47.254 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.254 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:47.511 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:47.511 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:47.512 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.512 20:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:47.779 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.779 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:47.779 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.779 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.037 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.037 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.037 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.037 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:48.295 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.295 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:48.295 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.295 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:48.565 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.565 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:48.565 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.565 20:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:48.828 20:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.828 20:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:48.828 20:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:48.828 20:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:49.086 20:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.461 20:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.719 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:50.719 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.719 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.719 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.977 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.977 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.977 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.977 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:51.234 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.234 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:51.234 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.234 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:51.491 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.491 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:51.491 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.491 20:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:51.749 20:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.749 20:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:51.749 20:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:52.007 20:37:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:52.266 20:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:53.200 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:53.200 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:53.200 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.200 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:53.458 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.458 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:53.458 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.458 20:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:53.716 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:53.716 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:53.716 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.716 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.974 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.974 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.974 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.974 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:54.231 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.232 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:54.232 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.232 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:54.490 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:31:54.490 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:54.490 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.490 20:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.748 20:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.748 20:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:54.748 20:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:55.006 20:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:55.263 20:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:56.194 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:56.194 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:56.194 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.194 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:56.451 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.451 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:56.451 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.451 20:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.708 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.708 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.708 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.708 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.965 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.965 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:31:56.965 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.965 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:57.222 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.222 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:57.222 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.222 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:57.480 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.480 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:57.480 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.480 20:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:57.738 20:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.738 20:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:57.738 20:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:57.995 20:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:58.252 20:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:59.184 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:59.184 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:59.184 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.184 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:59.442 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.442 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:59.442 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.442 20:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:59.699 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.699 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:59.699 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.699 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.958 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.958 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:59.958 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.958 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:00.223 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.223 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:00.223 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.223 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:00.481 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.481 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:00.481 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.481 20:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:00.738 20:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.738 20:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:00.996 20:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:00.996 20:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:01.253 20:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:01.510 20:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:02.443 20:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:02.443 20:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:02.443 20:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.443 20:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:02.701 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.702 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:02.702 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.702 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:02.959 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.959 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:02.959 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.959 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:03.217 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.217 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:03.217 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.217 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:03.475 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.475 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:03.475 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.475 20:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:03.732 20:37:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.732 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:03.732 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.732 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:03.990 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.990 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:03.990 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:04.247 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:04.504 20:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:05.436 20:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:05.719 20:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:05.719 20:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.719 20:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:05.719 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.719 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:05.719 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.719 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:05.976 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.976 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:05.976 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.976 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:06.234 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.234 20:37:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:06.234 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.234 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:06.492 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.492 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:06.492 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.492 20:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:06.751 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.751 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:06.751 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.751 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:07.009 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.009 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:07.009 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:07.267 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:07.524 20:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:08.455 20:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:08.455 20:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:08.455 20:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.455 20:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:08.712 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.712 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:08.712 20:37:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.712 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:08.969 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.969 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:08.969 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.969 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:09.226 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.226 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:09.226 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.226 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:09.484 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.484 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:09.484 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.484 20:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:09.742 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.742 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:09.742 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.742 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:09.999 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.999 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:09.999 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:10.257 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:10.515 20:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:11.468 20:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:11.468 20:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:11.468 20:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.468 20:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:11.726 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.726 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:11.726 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.726 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:11.983 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:11.983 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:11.983 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.983 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:12.240 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.240 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:12.240 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.240 20:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:12.498 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.498 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:12.498 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.498 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:12.756 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.756 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:12.756 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.756 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4176843 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4176843 ']' 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4176843 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176843 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176843' 00:32:13.014 killing process with pid 4176843 00:32:13.014 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4176843 00:32:13.310 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4176843 00:32:13.310 Connection closed with partial response: 00:32:13.310 00:32:13.310 00:32:13.310 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4176843 00:32:13.310 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:13.310 [2024-07-15 20:37:17.707826] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:32:13.310 [2024-07-15 20:37:17.707941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176843 ] 00:32:13.310 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.310 [2024-07-15 20:37:17.768775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.310 [2024-07-15 20:37:17.855423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.310 Running I/O for 90 seconds... 
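Before the captured bdevperf output below, a recap of the two operations the run above repeats for every ANA-state combination: flip the ANA state of both target listeners, then read the host-side path view back out of bdevperf. The following is a minimal bash sketch reconstructed from the rpc.py invocations visible in the log; the helper names set_ana and port_field are ours (they only approximate the set_ANA_state and port_status helpers of multipath_status.sh, whose exact bodies are not fully shown here).

#!/usr/bin/env bash
# Sketch only: assumes the SPDK target (listeners on 10.0.0.2:4420 and :4421 for
# nqn.2016-06.io.spdk:cnode1) and bdevperf (-r /var/tmp/bdevperf.sock) are already
# running, exactly as they are at this point in the test.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Target side: set the ANA state of each listener, then give the host a moment
# to pick up the change (the test above sleeps 1 second before checking).
set_ana() {                                   # e.g. set_ana non_optimized inaccessible
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    sleep 1
}

# Host side: read one field (current / connected / accessible) for one listener
# port out of bdevperf's bdev_nvme_get_io_paths dump.
port_field() {                                # e.g. port_field 4421 accessible
    local port=$1 field=$2
    "$rpc" -s "$sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
}

set_ana non_optimized inaccessible
if [[ $(port_field 4421 accessible) == false ]]; then
    echo "port 4421 now reports accessible=false, as expected"
fi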
00:32:13.310 [2024-07-15 20:37:33.405162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.310 [2024-07-15 20:37:33.405236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:13.310 [2024-07-15 20:37:33.405314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.310 [2024-07-15 20:37:33.405335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:13.310 [2024-07-15 20:37:33.405358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.310 [2024-07-15 20:37:33.405375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.405412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.405450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.405488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.405566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.405603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.405642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.405692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.405732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.405786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.405822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.405973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.405996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.311 [2024-07-15 20:37:33.406973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.406998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
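The completions printed here all carry the status ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. NVMe status code type 0x3 (path related), status code 0x02: the target is failing I/O that arrives on a listener whose ANA state the test has just switched to inaccessible, and the host-side multipath layer then has to retry or requeue that I/O on an available path. A quick, informal way to see how often that happened in this run is to count the matching lines in the bdevperf log the test cats above (file path as printed there):

# Count ANA-inaccessible completions recorded in the captured bdevperf log.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt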
00:32:13.311 [2024-07-15 20:37:33.407192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:125 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:13.311 [2024-07-15 20:37:33.407856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.311 [2024-07-15 20:37:33.407871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.407920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.407937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.407960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.407976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.408060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408084] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.408100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.408154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.408960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.408984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 
20:37:33.409333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.312 [2024-07-15 20:37:33.409371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.409411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.409451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.409597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.409643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:13.312 [2024-07-15 20:37:33.409675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.312 [2024-07-15 20:37:33.409692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.409718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.409734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.409760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.409776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.409802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.409818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.409854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.409870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.409930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.409948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.409975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.409992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410338] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 
20:37:33.410777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.410962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.410979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.313 [2024-07-15 20:37:33.411604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.313 [2024-07-15 20:37:33.411647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:13.313 [2024-07-15 20:37:33.411674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.313 [2024-07-15 20:37:33.411691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:33.411718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:33.411735] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.963898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.963971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.965908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.965936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.965963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.965981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.966004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.966020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.966042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.966058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.966092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.966109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.966131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.966148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.966185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.966201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.966222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.966238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.966259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 
20:37:48.966275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.968990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:13.314 [2024-07-15 20:37:48.969372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:13.314 [2024-07-15 20:37:48.969407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39272 len:8 SGL DATA 
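A note on the repeated (03/02) completions condensed above: in NVMe terms this is Status Code Type 0x3 (Path Related Status), Status Code 0x02, "Asymmetric Access Inaccessible". Each I/O was issued on a path whose ANA group the target reports as Inaccessible, which is the condition the nvmf_host_multipath_status test drives one listener into, and the initiator is expected to fail the I/O over to the other path. On a host with native NVMe multipath the per-path ANA state can be inspected with nvme-cli, for example with `nvme list-subsys` (illustrative command, not part of this log), whose output marks each path as optimized, non-optimized or inaccessible.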
00:32:13.314 Received shutdown signal, test time was about 32.328424 seconds
00:32:13.314
00:32:13.314                                     Latency(us)
00:32:13.314 Device Information                  : runtime(s)    IOPS      MiB/s   Fail/s   TO/s    Average      min          max
00:32:13.314 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:13.314     Verification LBA range: start 0x0 length 0x4000
00:32:13.314     Nvme0n1                         :      32.33   7931.66     30.98    0.00    0.00   16110.59    491.52   4026531.84
00:32:13.314 ===================================================================================================================
00:32:13.314 Total                               :              7931.66     30.98    0.00    0.00   16110.59    491.52   4026531.84
00:32:13.314 20:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
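A quick consistency check on the run summary above: 7931.66 IOPS at 4096 bytes per I/O is about 32.5 MB/s, which is 30.98 MiB/s and matches the reported MiB/s column. The nvmftestfini / nvmfcleanup trace that starts here and continues below amounts to the following teardown; a minimal bash sketch, assuming SPDK's stock scripts/rpc.py, the subsystem NQN used by this run, and a hypothetical $tgt_pid variable holding the target application's pid:

    # Tear down the NVMe/TCP test target, mirroring the traced sequence.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem first
    sync                                                                # flush before unloading modules
    modprobe -v -r nvme-tcp        # on this run the verbose output shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    if kill -0 "$tgt_pid" 2>/dev/null; then   # is the target process still alive?
        kill "$tgt_pid" && wait "$tgt_pid"    # stop it and reap its exit status
    fi

The final `wait` only reaps the exit status because the target was started by the same shell earlier in the job; a standalone script would have to poll instead.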
00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:13.592 rmmod nvme_tcp 00:32:13.592 rmmod nvme_fabrics 00:32:13.592 rmmod nvme_keyring 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4176565 ']' 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4176565 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4176565 ']' 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4176565 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176565 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176565' 00:32:13.592 killing process with pid 4176565 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4176565 00:32:13.592 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4176565 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:13.850 20:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.376 20:37:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:16.376 00:32:16.376 real 0m40.671s 00:32:16.376 user 2m1.296s 00:32:16.376 sys 0m11.244s 00:32:16.377 20:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:16.377 20:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:16.377 ************************************ 00:32:16.377 END TEST nvmf_host_multipath_status 
00:32:16.377 ************************************ 00:32:16.377 20:37:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:16.377 20:37:54 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:16.377 20:37:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:16.377 20:37:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.377 20:37:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:16.377 ************************************ 00:32:16.377 START TEST nvmf_discovery_remove_ifc 00:32:16.377 ************************************ 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:16.377 * Looking for test storage... 00:32:16.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2-4 -- # PATH rebuilt three times by prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the existing PATH (the resulting duplicate-heavy PATH strings are elided)
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo of the final PATH (value elided, same string as above)
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:16.377
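The NVME_HOSTNQN / NVME_HOSTID pair exported in the test configuration above comes from nvme-cli's gen-hostnqn; a minimal sketch of one way to derive and reuse it (the exact parameter expansion inside nvmf/common.sh may differ, and the connect line is illustrative, not taken from this log):

    # Generate a host NQN and keep only the UUID portion as the host ID,
    # mirroring the NVME_HOSTNQN / NVME_HOSTID values seen in the trace.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip everything up to "uuid:"
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # Later connects can then reuse the same identity, for example:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"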
20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:16.377 20:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@297 -- # local -ga x722 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:18.273 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:18.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.273 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:18.273 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:18.274 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:18.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:32:18.274 00:32:18.274 --- 10.0.0.2 ping statistics --- 00:32:18.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.274 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
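The nvmf_tcp_init steps traced above wire the two E810 ports (0000:0a:00.0/.1, driver ice) into a self-contained TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1). A minimal by-hand equivalent, using the interface names and addressing from this run (other machines will differ), looks roughly like:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The ping in each direction confirms basic reachability before any NVMe/TCP traffic is attempted.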
00:32:18.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:32:18.274 00:32:18.274 --- 10.0.0.1 ping statistics --- 00:32:18.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.274 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4183020 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4183020 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4183020 ']' 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.274 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.274 [2024-07-15 20:37:56.598475] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
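With the data path verified, nvmfappstart launches the target application inside the namespace and waits for its RPC socket. Using the paths from this run (the binary under build/bin in the SPDK checkout, default RPC socket /var/tmp/spdk.sock), the equivalent is roughly the following sketch; the harness's waitforlisten helper has more elaborate retry and error handling than shown here:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the RPC socket until the app is up and answering
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The -m 0x2 core mask pins the target to a single core (the "Reactor started on core 1" message below), and -e 0xFFFF enables all tracepoint groups, which is where the spdk_trace hints in the startup banner come from.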
00:32:18.274 [2024-07-15 20:37:56.598558] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.274 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.274 [2024-07-15 20:37:56.667340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.274 [2024-07-15 20:37:56.756399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.274 [2024-07-15 20:37:56.756465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.274 [2024-07-15 20:37:56.756491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.274 [2024-07-15 20:37:56.756504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.274 [2024-07-15 20:37:56.756516] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.274 [2024-07-15 20:37:56.756558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.532 [2024-07-15 20:37:56.916264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.532 [2024-07-15 20:37:56.924468] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:18.532 null0 00:32:18.532 [2024-07-15 20:37:56.956375] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4183044 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4183044 /tmp/host.sock 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4183044 ']' 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:18.532 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.532 20:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.532 [2024-07-15 20:37:57.022794] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:32:18.532 [2024-07-15 20:37:57.022889] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4183044 ] 00:32:18.532 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.790 [2024-07-15 20:37:57.084227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.790 [2024-07-15 20:37:57.174411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.790 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.047 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.047 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:19.047 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.047 20:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.002 [2024-07-15 20:37:58.385083] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:20.002 [2024-07-15 20:37:58.385109] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:20.002 [2024-07-15 20:37:58.385130] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:20.002 [2024-07-15 20:37:58.472429] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:20.260 [2024-07-15 20:37:58.696731] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:20.260 [2024-07-15 20:37:58.696800] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:20.260 [2024-07-15 20:37:58.696844] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:20.260 [2024-07-15 20:37:58.696870] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:20.260 [2024-07-15 20:37:58.696925] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:20.260 [2024-07-15 20:37:58.702940] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2288300 was disconnected and freed. delete nvme_qpair. 
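At this point the host-side application (a second nvmf_tgt instance with its RPC socket on /tmp/host.sock) has attached to the discovery service on 10.0.0.2:8009, found the nqn.2016-06.io.spdk:cnode0 subsystem on port 4420, and created the nvme0n1 bdev. The host-side sequence, taken from the commands traced above, is:

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss/reconnect timeouts matter later: they are what let the test observe the controller being torn down quickly once the target interface disappears. The get_bdev_list helper used by wait_for_bdev is simply:

    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs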
00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:20.260 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:20.517 20:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:21.449 20:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.381 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.639 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:22.639 20:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:23.571 20:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:24.502 20:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.502 20:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:24.502 20:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
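The repeated bdev_get_bdevs / sleep 1 iterations here are wait_for_bdev '' polling for the moment nvme0n1 disappears, after the test pulled the target's address and link out from under the established connection (discovery_remove_ifc.sh@75-76). Reproduced by hand, the removal plus the poll look roughly like:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # wait until the host-side bdev list is empty
    while [ -n "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs)" ]; do
        sleep 1
    done

Nothing changes for the first few seconds because the established TCP connection only fails once a timeout is reported, which is the errno 110 "Connection timed out" sequence that follows.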
00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:25.873 20:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.873 [2024-07-15 20:38:04.137693] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:25.873 [2024-07-15 20:38:04.137756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.873 [2024-07-15 20:38:04.137777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.873 [2024-07-15 20:38:04.137794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.873 [2024-07-15 20:38:04.137806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.873 [2024-07-15 20:38:04.137819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.873 [2024-07-15 20:38:04.137832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.873 [2024-07-15 20:38:04.137846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.873 [2024-07-15 20:38:04.137858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.873 [2024-07-15 20:38:04.137872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.873 [2024-07-15 20:38:04.137906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.873 [2024-07-15 20:38:04.137921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224eb40 is same with the state(5) to be set 00:32:25.873 [2024-07-15 20:38:04.147710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224eb40 (9): Bad file descriptor 00:32:25.873 [2024-07-15 20:38:04.157753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:26.804 [2024-07-15 20:38:05.202920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:26.804 [2024-07-15 
20:38:05.202988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224eb40 with addr=10.0.0.2, port=4420 00:32:26.804 [2024-07-15 20:38:05.203014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224eb40 is same with the state(5) to be set 00:32:26.804 [2024-07-15 20:38:05.203063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224eb40 (9): Bad file descriptor 00:32:26.804 [2024-07-15 20:38:05.203543] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:26.804 [2024-07-15 20:38:05.203579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.804 [2024-07-15 20:38:05.203597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:26.804 [2024-07-15 20:38:05.203615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:26.804 [2024-07-15 20:38:05.203658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.804 [2024-07-15 20:38:05.203678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:26.804 20:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.737 [2024-07-15 20:38:06.206192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:27.737 [2024-07-15 20:38:06.206265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.737 [2024-07-15 20:38:06.206280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:27.737 [2024-07-15 20:38:06.206294] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:27.737 [2024-07-15 20:38:06.206324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
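The burst of errors above is the expected consequence of the aggressive options passed to bdev_nvme_start_discovery: with --reconnect-delay-sec 1 the bdev_nvme module retries the connection to 10.0.0.2:4420 roughly once a second, each attempt fails with connect() errno 110 because cvl_0_0 is down, and once the 2-second ctrlr-loss timeout is exceeded the controller is declared lost and nvme0n1 is deleted, which is what finally lets the polling loop see an empty bdev list. While this is happening, the controller state can be inspected over the same RPC socket (a quick check, not part of the test itself):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers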
00:32:27.737 [2024-07-15 20:38:06.206363] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:27.737 [2024-07-15 20:38:06.206434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.737 [2024-07-15 20:38:06.206455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.737 [2024-07-15 20:38:06.206475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.737 [2024-07-15 20:38:06.206488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.737 [2024-07-15 20:38:06.206503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.737 [2024-07-15 20:38:06.206516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.737 [2024-07-15 20:38:06.206529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.737 [2024-07-15 20:38:06.206545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.737 [2024-07-15 20:38:06.206559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.737 [2024-07-15 20:38:06.206573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.737 [2024-07-15 20:38:06.206586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:27.737 [2024-07-15 20:38:06.206658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224df80 (9): Bad file descriptor 00:32:27.737 [2024-07-15 20:38:06.207653] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:27.737 [2024-07-15 20:38:06.207673] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.737 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:27.994 20:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:28.926 20:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.890 [2024-07-15 20:38:08.220173] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:29.890 [2024-07-15 20:38:08.220221] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:29.890 [2024-07-15 20:38:08.220245] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:29.890 [2024-07-15 20:38:08.307526] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:29.890 [2024-07-15 20:38:08.370290] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:29.890 [2024-07-15 20:38:08.370344] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:29.890 [2024-07-15 20:38:08.370380] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:29.890 [2024-07-15 20:38:08.370406] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:29.890 [2024-07-15 20:38:08.370420] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:29.890 [2024-07-15 20:38:08.378573] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22653f0 was disconnected and freed. delete nvme_qpair. 
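Bringing the interface back (discovery_remove_ifc.sh@82-83) lets the still-running discovery service reconnect on its own: the log page is fetched again, the same subsystem is attached as a new controller (nvme1), and a fresh bdev appears. By hand this is just the reverse of the removal, followed by the same polling, with nvme1n1 as the expected name:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # poll until the rediscovered bdev shows up
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect: nvme1n1

No new bdev_nvme_start_discovery call is needed; the discovery poller kept running across the outage, which is essentially what this test verifies.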
00:32:29.890 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.890 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.890 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.890 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.890 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.890 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.891 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.891 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.147 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4183044 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4183044 ']' 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4183044 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4183044 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4183044' 00:32:30.148 killing process with pid 4183044 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4183044 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4183044 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:30.148 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:30.148 rmmod nvme_tcp 00:32:30.405 rmmod nvme_fabrics 00:32:30.405 rmmod nvme_keyring 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
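Teardown then runs from the EXIT trap: killprocess stops the host application (pid 4183044 in this run), and nvmfcleanup unloads the kernel NVMe/TCP modules, which is what produces the rmmod lines above. The remaining steps, killing the target and removing the namespace, follow below. Condensed, with the pids obviously specific to this run and the namespace removal shown as its assumed equivalent:

    kill 4183044                        # host-side nvmf_tgt (/tmp/host.sock)
    modprobe -v -r nvme-tcp             # also drags out nvme_fabrics / nvme_keyring, as shown
    modprobe -v -r nvme-fabrics
    kill 4183020                        # target-side nvmf_tgt inside the namespace
    ip netns delete cvl_0_0_ns_spdk     # roughly what remove_spdk_ns does here (assumed)
    ip -4 addr flush cvl_0_1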
00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4183020 ']' 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4183020 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4183020 ']' 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4183020 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4183020 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4183020' 00:32:30.405 killing process with pid 4183020 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4183020 00:32:30.405 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4183020 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.664 20:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.563 20:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:32.563 00:32:32.563 real 0m16.559s 00:32:32.563 user 0m23.653s 00:32:32.563 sys 0m2.832s 00:32:32.563 20:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.563 20:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.563 ************************************ 00:32:32.563 END TEST nvmf_discovery_remove_ifc 00:32:32.563 ************************************ 00:32:32.563 20:38:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:32.563 20:38:11 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:32.563 20:38:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:32.563 20:38:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.563 20:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.563 ************************************ 00:32:32.563 START TEST nvmf_identify_kernel_target 00:32:32.563 ************************************ 
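The 0m16.559s real time reported for nvmf_discovery_remove_ifc is largely spent waiting out the deliberate connection timeout in the middle of the test. run_test then moves straight on to the next host-side test with the same transport argument, so the PCI scan and namespace setup that follow are a repeat of the sequence above, just under the new test name. Invoked outside the harness (no START/END banners or timing summary), that test would be simply:

    ./test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp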
00:32:32.563 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:32.821 * Looking for test storage... 00:32:32.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:32.821 20:38:11 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.821 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.822 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.822 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:32.822 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:32.822 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:32.822 20:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:34.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:34.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:34.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:34.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:34.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:32:34.723 00:32:34.723 --- 10.0.0.2 ping statistics --- 00:32:34.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.723 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:34.723 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:32:34.982 00:32:34.982 --- 10.0.0.1 ping statistics --- 00:32:34.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.982 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:34.982 20:38:13 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:34.982 20:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:35.918 Waiting for block devices as requested 00:32:36.196 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:36.196 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:36.196 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:36.453 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:36.453 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:36.453 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:36.711 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:36.711 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:36.711 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:36.711 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:36.970 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:36.970 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:36.970 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:36.970 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:37.228 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:37.228 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:37.228 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:37.486 No valid GPT data, bailing 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:37.486 00:32:37.486 Discovery Log Number of Records 2, Generation counter 2 00:32:37.486 =====Discovery Log Entry 0====== 00:32:37.486 trtype: tcp 00:32:37.486 adrfam: ipv4 00:32:37.486 subtype: current discovery subsystem 00:32:37.486 treq: not specified, sq flow control disable supported 00:32:37.486 portid: 1 00:32:37.486 trsvcid: 4420 00:32:37.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:37.486 traddr: 10.0.0.1 00:32:37.486 eflags: none 00:32:37.486 sectype: none 00:32:37.486 =====Discovery Log Entry 1====== 00:32:37.486 trtype: tcp 00:32:37.486 adrfam: ipv4 00:32:37.486 subtype: nvme subsystem 00:32:37.486 treq: not specified, sq flow control disable supported 00:32:37.486 portid: 1 00:32:37.486 trsvcid: 4420 00:32:37.486 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:37.486 traddr: 10.0.0.1 00:32:37.486 eflags: none 00:32:37.486 sectype: none 00:32:37.486 20:38:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:37.486 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:37.486 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.746 ===================================================== 00:32:37.747 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:37.747 ===================================================== 00:32:37.747 Controller Capabilities/Features 00:32:37.747 ================================ 00:32:37.747 Vendor ID: 0000 00:32:37.747 Subsystem Vendor ID: 0000 00:32:37.747 Serial Number: 09624d4b18c159400a6a 00:32:37.747 Model Number: Linux 00:32:37.747 Firmware Version: 6.7.0-68 00:32:37.747 Recommended Arb Burst: 0 00:32:37.747 IEEE OUI Identifier: 00 00 00 00:32:37.747 Multi-path I/O 00:32:37.747 May have multiple subsystem ports: No 00:32:37.747 May have multiple 
controllers: No 00:32:37.747 Associated with SR-IOV VF: No 00:32:37.747 Max Data Transfer Size: Unlimited 00:32:37.747 Max Number of Namespaces: 0 00:32:37.747 Max Number of I/O Queues: 1024 00:32:37.747 NVMe Specification Version (VS): 1.3 00:32:37.747 NVMe Specification Version (Identify): 1.3 00:32:37.747 Maximum Queue Entries: 1024 00:32:37.747 Contiguous Queues Required: No 00:32:37.747 Arbitration Mechanisms Supported 00:32:37.747 Weighted Round Robin: Not Supported 00:32:37.747 Vendor Specific: Not Supported 00:32:37.747 Reset Timeout: 7500 ms 00:32:37.747 Doorbell Stride: 4 bytes 00:32:37.747 NVM Subsystem Reset: Not Supported 00:32:37.747 Command Sets Supported 00:32:37.747 NVM Command Set: Supported 00:32:37.747 Boot Partition: Not Supported 00:32:37.747 Memory Page Size Minimum: 4096 bytes 00:32:37.747 Memory Page Size Maximum: 4096 bytes 00:32:37.747 Persistent Memory Region: Not Supported 00:32:37.747 Optional Asynchronous Events Supported 00:32:37.747 Namespace Attribute Notices: Not Supported 00:32:37.747 Firmware Activation Notices: Not Supported 00:32:37.747 ANA Change Notices: Not Supported 00:32:37.747 PLE Aggregate Log Change Notices: Not Supported 00:32:37.747 LBA Status Info Alert Notices: Not Supported 00:32:37.747 EGE Aggregate Log Change Notices: Not Supported 00:32:37.747 Normal NVM Subsystem Shutdown event: Not Supported 00:32:37.747 Zone Descriptor Change Notices: Not Supported 00:32:37.747 Discovery Log Change Notices: Supported 00:32:37.747 Controller Attributes 00:32:37.747 128-bit Host Identifier: Not Supported 00:32:37.747 Non-Operational Permissive Mode: Not Supported 00:32:37.747 NVM Sets: Not Supported 00:32:37.747 Read Recovery Levels: Not Supported 00:32:37.747 Endurance Groups: Not Supported 00:32:37.747 Predictable Latency Mode: Not Supported 00:32:37.747 Traffic Based Keep ALive: Not Supported 00:32:37.747 Namespace Granularity: Not Supported 00:32:37.747 SQ Associations: Not Supported 00:32:37.747 UUID List: Not Supported 00:32:37.747 Multi-Domain Subsystem: Not Supported 00:32:37.747 Fixed Capacity Management: Not Supported 00:32:37.747 Variable Capacity Management: Not Supported 00:32:37.747 Delete Endurance Group: Not Supported 00:32:37.747 Delete NVM Set: Not Supported 00:32:37.747 Extended LBA Formats Supported: Not Supported 00:32:37.747 Flexible Data Placement Supported: Not Supported 00:32:37.747 00:32:37.747 Controller Memory Buffer Support 00:32:37.747 ================================ 00:32:37.747 Supported: No 00:32:37.747 00:32:37.747 Persistent Memory Region Support 00:32:37.747 ================================ 00:32:37.747 Supported: No 00:32:37.747 00:32:37.747 Admin Command Set Attributes 00:32:37.747 ============================ 00:32:37.747 Security Send/Receive: Not Supported 00:32:37.747 Format NVM: Not Supported 00:32:37.747 Firmware Activate/Download: Not Supported 00:32:37.747 Namespace Management: Not Supported 00:32:37.747 Device Self-Test: Not Supported 00:32:37.747 Directives: Not Supported 00:32:37.747 NVMe-MI: Not Supported 00:32:37.747 Virtualization Management: Not Supported 00:32:37.747 Doorbell Buffer Config: Not Supported 00:32:37.747 Get LBA Status Capability: Not Supported 00:32:37.747 Command & Feature Lockdown Capability: Not Supported 00:32:37.747 Abort Command Limit: 1 00:32:37.747 Async Event Request Limit: 1 00:32:37.747 Number of Firmware Slots: N/A 00:32:37.747 Firmware Slot 1 Read-Only: N/A 00:32:37.747 Firmware Activation Without Reset: N/A 00:32:37.747 Multiple Update Detection Support: N/A 
00:32:37.747 Firmware Update Granularity: No Information Provided 00:32:37.747 Per-Namespace SMART Log: No 00:32:37.747 Asymmetric Namespace Access Log Page: Not Supported 00:32:37.747 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:37.747 Command Effects Log Page: Not Supported 00:32:37.747 Get Log Page Extended Data: Supported 00:32:37.747 Telemetry Log Pages: Not Supported 00:32:37.747 Persistent Event Log Pages: Not Supported 00:32:37.747 Supported Log Pages Log Page: May Support 00:32:37.747 Commands Supported & Effects Log Page: Not Supported 00:32:37.747 Feature Identifiers & Effects Log Page:May Support 00:32:37.747 NVMe-MI Commands & Effects Log Page: May Support 00:32:37.747 Data Area 4 for Telemetry Log: Not Supported 00:32:37.747 Error Log Page Entries Supported: 1 00:32:37.747 Keep Alive: Not Supported 00:32:37.747 00:32:37.747 NVM Command Set Attributes 00:32:37.747 ========================== 00:32:37.747 Submission Queue Entry Size 00:32:37.747 Max: 1 00:32:37.747 Min: 1 00:32:37.747 Completion Queue Entry Size 00:32:37.747 Max: 1 00:32:37.747 Min: 1 00:32:37.747 Number of Namespaces: 0 00:32:37.747 Compare Command: Not Supported 00:32:37.747 Write Uncorrectable Command: Not Supported 00:32:37.747 Dataset Management Command: Not Supported 00:32:37.747 Write Zeroes Command: Not Supported 00:32:37.747 Set Features Save Field: Not Supported 00:32:37.747 Reservations: Not Supported 00:32:37.747 Timestamp: Not Supported 00:32:37.747 Copy: Not Supported 00:32:37.747 Volatile Write Cache: Not Present 00:32:37.747 Atomic Write Unit (Normal): 1 00:32:37.747 Atomic Write Unit (PFail): 1 00:32:37.747 Atomic Compare & Write Unit: 1 00:32:37.747 Fused Compare & Write: Not Supported 00:32:37.747 Scatter-Gather List 00:32:37.747 SGL Command Set: Supported 00:32:37.747 SGL Keyed: Not Supported 00:32:37.747 SGL Bit Bucket Descriptor: Not Supported 00:32:37.747 SGL Metadata Pointer: Not Supported 00:32:37.747 Oversized SGL: Not Supported 00:32:37.747 SGL Metadata Address: Not Supported 00:32:37.747 SGL Offset: Supported 00:32:37.747 Transport SGL Data Block: Not Supported 00:32:37.747 Replay Protected Memory Block: Not Supported 00:32:37.747 00:32:37.747 Firmware Slot Information 00:32:37.747 ========================= 00:32:37.747 Active slot: 0 00:32:37.747 00:32:37.747 00:32:37.747 Error Log 00:32:37.747 ========= 00:32:37.747 00:32:37.747 Active Namespaces 00:32:37.747 ================= 00:32:37.747 Discovery Log Page 00:32:37.747 ================== 00:32:37.747 Generation Counter: 2 00:32:37.747 Number of Records: 2 00:32:37.747 Record Format: 0 00:32:37.747 00:32:37.747 Discovery Log Entry 0 00:32:37.747 ---------------------- 00:32:37.747 Transport Type: 3 (TCP) 00:32:37.747 Address Family: 1 (IPv4) 00:32:37.747 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:37.747 Entry Flags: 00:32:37.747 Duplicate Returned Information: 0 00:32:37.747 Explicit Persistent Connection Support for Discovery: 0 00:32:37.747 Transport Requirements: 00:32:37.747 Secure Channel: Not Specified 00:32:37.747 Port ID: 1 (0x0001) 00:32:37.747 Controller ID: 65535 (0xffff) 00:32:37.747 Admin Max SQ Size: 32 00:32:37.747 Transport Service Identifier: 4420 00:32:37.747 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:37.747 Transport Address: 10.0.0.1 00:32:37.747 Discovery Log Entry 1 00:32:37.747 ---------------------- 00:32:37.747 Transport Type: 3 (TCP) 00:32:37.747 Address Family: 1 (IPv4) 00:32:37.747 Subsystem Type: 2 (NVM Subsystem) 00:32:37.747 Entry Flags: 
00:32:37.747 Duplicate Returned Information: 0 00:32:37.747 Explicit Persistent Connection Support for Discovery: 0 00:32:37.747 Transport Requirements: 00:32:37.747 Secure Channel: Not Specified 00:32:37.747 Port ID: 1 (0x0001) 00:32:37.747 Controller ID: 65535 (0xffff) 00:32:37.747 Admin Max SQ Size: 32 00:32:37.747 Transport Service Identifier: 4420 00:32:37.747 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:37.747 Transport Address: 10.0.0.1 00:32:37.747 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:37.747 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.747 get_feature(0x01) failed 00:32:37.747 get_feature(0x02) failed 00:32:37.747 get_feature(0x04) failed 00:32:37.747 ===================================================== 00:32:37.747 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:37.747 ===================================================== 00:32:37.747 Controller Capabilities/Features 00:32:37.747 ================================ 00:32:37.747 Vendor ID: 0000 00:32:37.747 Subsystem Vendor ID: 0000 00:32:37.747 Serial Number: 44102af430e48bf324a2 00:32:37.747 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:37.747 Firmware Version: 6.7.0-68 00:32:37.748 Recommended Arb Burst: 6 00:32:37.748 IEEE OUI Identifier: 00 00 00 00:32:37.748 Multi-path I/O 00:32:37.748 May have multiple subsystem ports: Yes 00:32:37.748 May have multiple controllers: Yes 00:32:37.748 Associated with SR-IOV VF: No 00:32:37.748 Max Data Transfer Size: Unlimited 00:32:37.748 Max Number of Namespaces: 1024 00:32:37.748 Max Number of I/O Queues: 128 00:32:37.748 NVMe Specification Version (VS): 1.3 00:32:37.748 NVMe Specification Version (Identify): 1.3 00:32:37.748 Maximum Queue Entries: 1024 00:32:37.748 Contiguous Queues Required: No 00:32:37.748 Arbitration Mechanisms Supported 00:32:37.748 Weighted Round Robin: Not Supported 00:32:37.748 Vendor Specific: Not Supported 00:32:37.748 Reset Timeout: 7500 ms 00:32:37.748 Doorbell Stride: 4 bytes 00:32:37.748 NVM Subsystem Reset: Not Supported 00:32:37.748 Command Sets Supported 00:32:37.748 NVM Command Set: Supported 00:32:37.748 Boot Partition: Not Supported 00:32:37.748 Memory Page Size Minimum: 4096 bytes 00:32:37.748 Memory Page Size Maximum: 4096 bytes 00:32:37.748 Persistent Memory Region: Not Supported 00:32:37.748 Optional Asynchronous Events Supported 00:32:37.748 Namespace Attribute Notices: Supported 00:32:37.748 Firmware Activation Notices: Not Supported 00:32:37.748 ANA Change Notices: Supported 00:32:37.748 PLE Aggregate Log Change Notices: Not Supported 00:32:37.748 LBA Status Info Alert Notices: Not Supported 00:32:37.748 EGE Aggregate Log Change Notices: Not Supported 00:32:37.748 Normal NVM Subsystem Shutdown event: Not Supported 00:32:37.748 Zone Descriptor Change Notices: Not Supported 00:32:37.748 Discovery Log Change Notices: Not Supported 00:32:37.748 Controller Attributes 00:32:37.748 128-bit Host Identifier: Supported 00:32:37.748 Non-Operational Permissive Mode: Not Supported 00:32:37.748 NVM Sets: Not Supported 00:32:37.748 Read Recovery Levels: Not Supported 00:32:37.748 Endurance Groups: Not Supported 00:32:37.748 Predictable Latency Mode: Not Supported 00:32:37.748 Traffic Based Keep ALive: Supported 00:32:37.748 Namespace Granularity: Not Supported 
00:32:37.748 SQ Associations: Not Supported 00:32:37.748 UUID List: Not Supported 00:32:37.748 Multi-Domain Subsystem: Not Supported 00:32:37.748 Fixed Capacity Management: Not Supported 00:32:37.748 Variable Capacity Management: Not Supported 00:32:37.748 Delete Endurance Group: Not Supported 00:32:37.748 Delete NVM Set: Not Supported 00:32:37.748 Extended LBA Formats Supported: Not Supported 00:32:37.748 Flexible Data Placement Supported: Not Supported 00:32:37.748 00:32:37.748 Controller Memory Buffer Support 00:32:37.748 ================================ 00:32:37.748 Supported: No 00:32:37.748 00:32:37.748 Persistent Memory Region Support 00:32:37.748 ================================ 00:32:37.748 Supported: No 00:32:37.748 00:32:37.748 Admin Command Set Attributes 00:32:37.748 ============================ 00:32:37.748 Security Send/Receive: Not Supported 00:32:37.748 Format NVM: Not Supported 00:32:37.748 Firmware Activate/Download: Not Supported 00:32:37.748 Namespace Management: Not Supported 00:32:37.748 Device Self-Test: Not Supported 00:32:37.748 Directives: Not Supported 00:32:37.748 NVMe-MI: Not Supported 00:32:37.748 Virtualization Management: Not Supported 00:32:37.748 Doorbell Buffer Config: Not Supported 00:32:37.748 Get LBA Status Capability: Not Supported 00:32:37.748 Command & Feature Lockdown Capability: Not Supported 00:32:37.748 Abort Command Limit: 4 00:32:37.748 Async Event Request Limit: 4 00:32:37.748 Number of Firmware Slots: N/A 00:32:37.748 Firmware Slot 1 Read-Only: N/A 00:32:37.748 Firmware Activation Without Reset: N/A 00:32:37.748 Multiple Update Detection Support: N/A 00:32:37.748 Firmware Update Granularity: No Information Provided 00:32:37.748 Per-Namespace SMART Log: Yes 00:32:37.748 Asymmetric Namespace Access Log Page: Supported 00:32:37.748 ANA Transition Time : 10 sec 00:32:37.748 00:32:37.748 Asymmetric Namespace Access Capabilities 00:32:37.748 ANA Optimized State : Supported 00:32:37.748 ANA Non-Optimized State : Supported 00:32:37.748 ANA Inaccessible State : Supported 00:32:37.748 ANA Persistent Loss State : Supported 00:32:37.748 ANA Change State : Supported 00:32:37.748 ANAGRPID is not changed : No 00:32:37.748 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:37.748 00:32:37.748 ANA Group Identifier Maximum : 128 00:32:37.748 Number of ANA Group Identifiers : 128 00:32:37.748 Max Number of Allowed Namespaces : 1024 00:32:37.748 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:37.748 Command Effects Log Page: Supported 00:32:37.748 Get Log Page Extended Data: Supported 00:32:37.748 Telemetry Log Pages: Not Supported 00:32:37.748 Persistent Event Log Pages: Not Supported 00:32:37.748 Supported Log Pages Log Page: May Support 00:32:37.748 Commands Supported & Effects Log Page: Not Supported 00:32:37.748 Feature Identifiers & Effects Log Page:May Support 00:32:37.748 NVMe-MI Commands & Effects Log Page: May Support 00:32:37.748 Data Area 4 for Telemetry Log: Not Supported 00:32:37.748 Error Log Page Entries Supported: 128 00:32:37.748 Keep Alive: Supported 00:32:37.748 Keep Alive Granularity: 1000 ms 00:32:37.748 00:32:37.748 NVM Command Set Attributes 00:32:37.748 ========================== 00:32:37.748 Submission Queue Entry Size 00:32:37.748 Max: 64 00:32:37.748 Min: 64 00:32:37.748 Completion Queue Entry Size 00:32:37.748 Max: 16 00:32:37.748 Min: 16 00:32:37.748 Number of Namespaces: 1024 00:32:37.748 Compare Command: Not Supported 00:32:37.748 Write Uncorrectable Command: Not Supported 00:32:37.748 Dataset Management Command: Supported 
00:32:37.748 Write Zeroes Command: Supported 00:32:37.748 Set Features Save Field: Not Supported 00:32:37.748 Reservations: Not Supported 00:32:37.748 Timestamp: Not Supported 00:32:37.748 Copy: Not Supported 00:32:37.748 Volatile Write Cache: Present 00:32:37.748 Atomic Write Unit (Normal): 1 00:32:37.748 Atomic Write Unit (PFail): 1 00:32:37.748 Atomic Compare & Write Unit: 1 00:32:37.748 Fused Compare & Write: Not Supported 00:32:37.748 Scatter-Gather List 00:32:37.748 SGL Command Set: Supported 00:32:37.748 SGL Keyed: Not Supported 00:32:37.748 SGL Bit Bucket Descriptor: Not Supported 00:32:37.748 SGL Metadata Pointer: Not Supported 00:32:37.748 Oversized SGL: Not Supported 00:32:37.748 SGL Metadata Address: Not Supported 00:32:37.748 SGL Offset: Supported 00:32:37.748 Transport SGL Data Block: Not Supported 00:32:37.748 Replay Protected Memory Block: Not Supported 00:32:37.748 00:32:37.748 Firmware Slot Information 00:32:37.748 ========================= 00:32:37.748 Active slot: 0 00:32:37.748 00:32:37.748 Asymmetric Namespace Access 00:32:37.748 =========================== 00:32:37.748 Change Count : 0 00:32:37.748 Number of ANA Group Descriptors : 1 00:32:37.748 ANA Group Descriptor : 0 00:32:37.748 ANA Group ID : 1 00:32:37.748 Number of NSID Values : 1 00:32:37.748 Change Count : 0 00:32:37.748 ANA State : 1 00:32:37.748 Namespace Identifier : 1 00:32:37.748 00:32:37.748 Commands Supported and Effects 00:32:37.748 ============================== 00:32:37.748 Admin Commands 00:32:37.748 -------------- 00:32:37.748 Get Log Page (02h): Supported 00:32:37.748 Identify (06h): Supported 00:32:37.748 Abort (08h): Supported 00:32:37.748 Set Features (09h): Supported 00:32:37.748 Get Features (0Ah): Supported 00:32:37.748 Asynchronous Event Request (0Ch): Supported 00:32:37.748 Keep Alive (18h): Supported 00:32:37.748 I/O Commands 00:32:37.748 ------------ 00:32:37.748 Flush (00h): Supported 00:32:37.748 Write (01h): Supported LBA-Change 00:32:37.748 Read (02h): Supported 00:32:37.748 Write Zeroes (08h): Supported LBA-Change 00:32:37.748 Dataset Management (09h): Supported 00:32:37.748 00:32:37.748 Error Log 00:32:37.748 ========= 00:32:37.748 Entry: 0 00:32:37.748 Error Count: 0x3 00:32:37.748 Submission Queue Id: 0x0 00:32:37.748 Command Id: 0x5 00:32:37.748 Phase Bit: 0 00:32:37.748 Status Code: 0x2 00:32:37.748 Status Code Type: 0x0 00:32:37.748 Do Not Retry: 1 00:32:37.748 Error Location: 0x28 00:32:37.748 LBA: 0x0 00:32:37.748 Namespace: 0x0 00:32:37.748 Vendor Log Page: 0x0 00:32:37.748 ----------- 00:32:37.748 Entry: 1 00:32:37.748 Error Count: 0x2 00:32:37.748 Submission Queue Id: 0x0 00:32:37.748 Command Id: 0x5 00:32:37.748 Phase Bit: 0 00:32:37.748 Status Code: 0x2 00:32:37.748 Status Code Type: 0x0 00:32:37.748 Do Not Retry: 1 00:32:37.748 Error Location: 0x28 00:32:37.748 LBA: 0x0 00:32:37.748 Namespace: 0x0 00:32:37.748 Vendor Log Page: 0x0 00:32:37.748 ----------- 00:32:37.748 Entry: 2 00:32:37.748 Error Count: 0x1 00:32:37.748 Submission Queue Id: 0x0 00:32:37.748 Command Id: 0x4 00:32:37.748 Phase Bit: 0 00:32:37.748 Status Code: 0x2 00:32:37.748 Status Code Type: 0x0 00:32:37.748 Do Not Retry: 1 00:32:37.748 Error Location: 0x28 00:32:37.748 LBA: 0x0 00:32:37.748 Namespace: 0x0 00:32:37.749 Vendor Log Page: 0x0 00:32:37.749 00:32:37.749 Number of Queues 00:32:37.749 ================ 00:32:37.749 Number of I/O Submission Queues: 128 00:32:37.749 Number of I/O Completion Queues: 128 00:32:37.749 00:32:37.749 ZNS Specific Controller Data 00:32:37.749 
============================ 00:32:37.749 Zone Append Size Limit: 0 00:32:37.749 00:32:37.749 00:32:37.749 Active Namespaces 00:32:37.749 ================= 00:32:37.749 get_feature(0x05) failed 00:32:37.749 Namespace ID:1 00:32:37.749 Command Set Identifier: NVM (00h) 00:32:37.749 Deallocate: Supported 00:32:37.749 Deallocated/Unwritten Error: Not Supported 00:32:37.749 Deallocated Read Value: Unknown 00:32:37.749 Deallocate in Write Zeroes: Not Supported 00:32:37.749 Deallocated Guard Field: 0xFFFF 00:32:37.749 Flush: Supported 00:32:37.749 Reservation: Not Supported 00:32:37.749 Namespace Sharing Capabilities: Multiple Controllers 00:32:37.749 Size (in LBAs): 1953525168 (931GiB) 00:32:37.749 Capacity (in LBAs): 1953525168 (931GiB) 00:32:37.749 Utilization (in LBAs): 1953525168 (931GiB) 00:32:37.749 UUID: 90464a00-67dc-423a-980f-fe20a00b4f7a 00:32:37.749 Thin Provisioning: Not Supported 00:32:37.749 Per-NS Atomic Units: Yes 00:32:37.749 Atomic Boundary Size (Normal): 0 00:32:37.749 Atomic Boundary Size (PFail): 0 00:32:37.749 Atomic Boundary Offset: 0 00:32:37.749 NGUID/EUI64 Never Reused: No 00:32:37.749 ANA group ID: 1 00:32:37.749 Namespace Write Protected: No 00:32:37.749 Number of LBA Formats: 1 00:32:37.749 Current LBA Format: LBA Format #00 00:32:37.749 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:37.749 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:37.749 rmmod nvme_tcp 00:32:37.749 rmmod nvme_fabrics 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.749 20:38:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:40.280 
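With the identify pass finished, the trap now tears the test down: nvmftestfini above unloads nvme-tcp/nvme-fabrics and flushes the interfaces, and the clean_kernel_target call traced next removes the configfs tree that configure_kernel_target built earlier. A condensed sketch of that kernel nvmet configfs lifecycle follows, using the same subsystem NQN, backing device, address and port that appear in the trace; it illustrates the generic configfs interface, not the SPDK helper itself, and the attr_allow_any_host line is an added assumption about host restrictions.

# configure: expose /dev/nvme0n1 as namespace 1 of a kernel NVMe-oF/TCP subsystem
modprobe nvmet
modprobe nvmet_tcp
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"              # assumption: any host may connect during the test
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"        # publish the subsystem on the port

# clean: reverse the steps before unloading the modules
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet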
20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:40.280 20:38:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:41.211 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:41.211 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:41.211 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:41.211 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:41.211 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:41.211 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:41.211 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:41.211 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:41.211 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:42.145 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:42.145 00:32:42.145 real 0m9.597s 00:32:42.145 user 0m2.008s 00:32:42.145 sys 0m3.517s 00:32:42.145 20:38:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:42.145 20:38:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.145 ************************************ 00:32:42.145 END TEST nvmf_identify_kernel_target 00:32:42.401 ************************************ 00:32:42.401 20:38:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:42.401 20:38:20 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:42.401 20:38:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:42.401 20:38:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:42.401 20:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:42.401 ************************************ 00:32:42.401 START TEST nvmf_auth_host 00:32:42.401 ************************************ 00:32:42.401 20:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:42.401 * Looking for test storage... 00:32:42.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:42.401 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.401 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:42.401 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.401 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.401 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.401 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:42.402 20:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.301 
20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:44.301 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:44.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:44.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:44.302 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:44.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:44.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:32:44.302 00:32:44.302 --- 10.0.0.2 ping statistics --- 00:32:44.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.302 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:44.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:32:44.302 00:32:44.302 --- 10.0.0.1 ping statistics --- 00:32:44.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.302 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=4189974 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 4189974 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 4189974 ']' 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
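The commands above wire the two E810 ports into a self-contained TCP loopback: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP/4420 on the initiator interface, and a ping in each direction confirms the path before the target is started. Condensed into one place, keeping the interface and namespace names from this run:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"      # start from a clean slate
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target side
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                      # target namespace -> initiator

Every later command aimed at the target is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what NVMF_TARGET_NS_CMD carries.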
00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:44.302 20:38:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.560 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:44.560 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:44.560 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:44.560 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:44.560 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=60e6d12f5f00fa63c56db1523e3900b4 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zfj 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 60e6d12f5f00fa63c56db1523e3900b4 0 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 60e6d12f5f00fa63c56db1523e3900b4 0 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=60e6d12f5f00fa63c56db1523e3900b4 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zfj 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zfj 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zfj 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:44.819 
20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c4d0368665f816f1bcd760cb087527547c2f1377f751a04155dc61ad575952b 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.u9t 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c4d0368665f816f1bcd760cb087527547c2f1377f751a04155dc61ad575952b 3 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c4d0368665f816f1bcd760cb087527547c2f1377f751a04155dc61ad575952b 3 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c4d0368665f816f1bcd760cb087527547c2f1377f751a04155dc61ad575952b 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.u9t 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.u9t 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.u9t 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6bfce136cc7e90f6e88483bee69cc15f8b889b12a0575da5 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Tf2 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6bfce136cc7e90f6e88483bee69cc15f8b889b12a0575da5 0 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6bfce136cc7e90f6e88483bee69cc15f8b889b12a0575da5 0 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6bfce136cc7e90f6e88483bee69cc15f8b889b12a0575da5 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Tf2 00:32:44.819 20:38:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Tf2 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Tf2 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c850a2171bd7b2ce22a140017b17fdbc91291225c15bdea 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.o69 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c850a2171bd7b2ce22a140017b17fdbc91291225c15bdea 2 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c850a2171bd7b2ce22a140017b17fdbc91291225c15bdea 2 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c850a2171bd7b2ce22a140017b17fdbc91291225c15bdea 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.o69 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.o69 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.o69 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:44.819 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa3b614538382eb028bb2796b17914ce 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KPz 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa3b614538382eb028bb2796b17914ce 1 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa3b614538382eb028bb2796b17914ce 1 
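Each gen_dhchap_key call above pulls len/2 random bytes from /dev/urandom as a hex string with xxd, and format_dhchap_key/format_key then wrap that string into the DHHC-1 secret representation: a two-hex-digit hash identifier (00 = unhashed, 01/02/03 = SHA-256/384/512) followed by base64 of the secret bytes with their CRC-32 appended, which is what lands in the /tmp/spdk.key-* files. A standalone sketch that produces strings of the same shape; the little-endian byte order of the appended CRC is an assumption based on the usual DH-HMAC-CHAP secret encoding, since the trace does not show the body of the python helper:

  # format_dhchap <hex-secret-string> <hash-id 0..3>  ->  DHHC-1:<id>:<base64(secret+crc32)>:
  format_dhchap() {
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed byte order
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")' "$1" "$2"
  }

  key=$(xxd -p -c0 -l 16 /dev/urandom)             # 32 hex characters, as for the null/sha256 keys above
  file=$(mktemp -t spdk.key-null.XXX)
  format_dhchap "$key" 0 > "$file"
  chmod 0600 "$file"

Decoding the base64 part of any key above (for example DHHC-1:00:NjBlNmQx...) gives back the hex string plus four trailing CRC bytes, which is what lets either side sanity-check a pasted secret.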
00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa3b614538382eb028bb2796b17914ce 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:44.820 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KPz 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KPz 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.KPz 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4b79ba09d099753e9c86b43199aef137 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VAK 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4b79ba09d099753e9c86b43199aef137 1 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4b79ba09d099753e9c86b43199aef137 1 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4b79ba09d099753e9c86b43199aef137 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VAK 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VAK 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VAK 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=5aa712464d6353ff1ca89889e8804d9bed2bc558fc40933d 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UCy 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5aa712464d6353ff1ca89889e8804d9bed2bc558fc40933d 2 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5aa712464d6353ff1ca89889e8804d9bed2bc558fc40933d 2 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5aa712464d6353ff1ca89889e8804d9bed2bc558fc40933d 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UCy 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UCy 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.UCy 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=defd13eb95431d42b2d48ba2aecd89af 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.S9P 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key defd13eb95431d42b2d48ba2aecd89af 0 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 defd13eb95431d42b2d48ba2aecd89af 0 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=defd13eb95431d42b2d48ba2aecd89af 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.S9P 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.S9P 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.S9P 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=05a6037b53c8b0afc8981f7e74971c716b14c819cb996494318aee64cd30930b 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Qfk 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 05a6037b53c8b0afc8981f7e74971c716b14c819cb996494318aee64cd30930b 3 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 05a6037b53c8b0afc8981f7e74971c716b14c819cb996494318aee64cd30930b 3 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=05a6037b53c8b0afc8981f7e74971c716b14c819cb996494318aee64cd30930b 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:45.086 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Qfk 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Qfk 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Qfk 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4189974 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 4189974 ']' 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
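nvmfappstart then launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, per the trace above) and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, assuming scripts/rpc.py from the same checkout; the retry count and interval are arbitrary:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  for _ in $(seq 1 100); do
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break   # target is listening
    kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited before listening' >&2; exit 1; }
    sleep 0.1
  done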
00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:45.087 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zfj 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.u9t ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u9t 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Tf2 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.o69 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o69 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.KPz 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VAK ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VAK 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
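Each generated file is then handed to the running target as a named keyring entry: key0 through key4 for the host secrets and ckey0 through ckey3 for the controller (bidirectional) secrets. The rpc_cmd keyring_file_add_key calls above amount to invoking scripts/rpc.py against the target's /var/tmp/spdk.sock (a path-based UNIX socket, so no netns prefix is needed); one pair, using the temp-file names from this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  "$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.zfj
  "$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u9t

The names key0/ckey0 are what the later bdev_nvme_attach_controller calls reference through --dhchap-key and --dhchap-ctrlr-key.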
00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.UCy 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.S9P ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.S9P 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Qfk 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
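nvmet_auth_init then sets up the kernel-side counterpart: configure_kernel_target loads nvmet, creates subsystem nqn.2024-02.io.spdk:cnode0 with one namespace backed by the local /dev/nvme0n1 and a TCP port on 10.0.0.1:4420 through configfs, and links the two. The mkdir/echo/ln -s sequence that follows in the trace corresponds to the steps below; the destination attribute files are not visible in the bare echo commands, so the names here follow the standard nvmet configfs layout:

  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
  echo 1             > "$subsys/attr_allow_any_host"      # set back to 0 once the allowed host is linked
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
  echo tcp           > "$nvmet/ports/1/addr_trtype"
  echo 4420          > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4          > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover against 10.0.0.1 a few lines below is the check that this kernel target really exposes both the discovery subsystem and nqn.2024-02.io.spdk:cnode0.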
00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:45.375 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:45.633 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:45.633 20:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:46.566 Waiting for block devices as requested 00:32:46.566 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:46.824 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:46.824 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:47.100 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:47.100 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:47.100 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:47.100 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:47.100 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:47.356 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:47.357 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:47.357 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:47.357 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:47.613 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:47.613 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:47.613 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:47.613 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:47.613 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:48.179 No valid GPT data, bailing 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:48.179 00:32:48.179 Discovery Log Number of Records 2, Generation counter 2 00:32:48.179 =====Discovery Log Entry 0====== 00:32:48.179 trtype: tcp 00:32:48.179 adrfam: ipv4 00:32:48.179 subtype: current discovery subsystem 00:32:48.179 treq: not specified, sq flow control disable supported 00:32:48.179 portid: 1 00:32:48.179 trsvcid: 4420 00:32:48.179 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:48.179 traddr: 10.0.0.1 00:32:48.179 eflags: none 00:32:48.179 sectype: none 00:32:48.179 =====Discovery Log Entry 1====== 00:32:48.179 trtype: tcp 00:32:48.179 adrfam: ipv4 00:32:48.179 subtype: nvme subsystem 00:32:48.179 treq: not specified, sq flow control disable supported 00:32:48.179 portid: 1 00:32:48.179 trsvcid: 4420 00:32:48.179 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:48.179 traddr: 10.0.0.1 00:32:48.179 eflags: none 00:32:48.179 sectype: none 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 
]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.179 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.437 nvme0n1 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.437 
20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.437 
20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.437 20:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.694 nvme0n1 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.694 20:38:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.694 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.974 nvme0n1 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
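From this point the test repeats one cycle per digest/DH-group/key-index combination: nvmet_auth_set_key programs the kernel host entry with the negotiated parameters (the hmac(...) digest string, the FFDHE group, and the DHHC-1 key and controller key), and connect_authenticate restricts the SPDK initiator to the same digest and group via bdev_nvme_set_options, attaches with the matching keyring entries, checks that the controller shows up, and detaches. One iteration, condensed; the values shown are those of the keyid=2 pass that follows, and the kernel attribute file names are the standard nvmet dhchap attributes, which the bare echo commands in the trace do not reveal:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'   > "$host/dhchap_hash"
  echo ffdhe2048        > "$host/dhchap_dhgroup"
  echo 'DHHC-1:01:...:' > "$host/dhchap_key"        # keys[2]; full string elided here
  echo 'DHHC-1:01:...:' > "$host/dhchap_ctrl_key"   # ckeys[2]

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  "$RPC" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
         --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$RPC" bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  "$RPC" bdev_nvme_detach_controller nvme0

A failed authentication would surface here as bdev_nvme_attach_controller erroring out instead of the nvme0n1 output seen after each successful attach.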
00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.974 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.975 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.232 nvme0n1 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:49.232 20:38:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.232 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.233 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.490 nvme0n1 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.490 nvme0n1 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.490 20:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.748 nvme0n1 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.748 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.005 nvme0n1 00:32:50.005 
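The trace keeps repeating the same two-step pattern for every dhgroup/keyid combination: host/auth.sh@103 programs the key on the target side (nvmet_auth_set_key) and host/auth.sh@104 exercises it from the host side (connect_authenticate). A minimal sketch of that driver loop, reconstructed only from the loop headers and call arguments visible in the xtrace output; the digests list and the keys/ckeys arrays are assumed to have been populated earlier in the script and are not shown in this excerpt:

# Reconstructed from host/auth.sh@101-104 as seen in the trace; array contents are assumed.
for digest in "${digests[@]}"; do          # only sha256 appears in this part of the log
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do         # keyids 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # provision target-side key
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach on the host
    done
  done
done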
20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.005 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.263 nvme0n1 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.263 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
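The echoes at host/auth.sh@48-51 ('hmac(sha256)', the dhgroup name, and the DHHC-1 key strings) are the target-side half of the setup; xtrace does not show where those echoes are redirected, but on a Linux nvmet soft target they would be written into the per-host configfs attributes. A hedged sketch of what nvmet_auth_set_key appears to be doing, where the configfs directory layout and the hostnqn variable are assumptions rather than values taken from this log:

# Hypothetical reconstruction; only the echoed values come from the trace above.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
    local host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn     # assumed configfs path

    echo "hmac($digest)" > "$host_dir/dhchap_hash"              # host/auth.sh@48
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"           # host/auth.sh@49
    echo "$key"          > "$host_dir/dhchap_key"               # host/auth.sh@50
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key" # host/auth.sh@51, bidirectional auth only
}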
00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.521 20:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.521 nvme0n1 00:32:50.521 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.521 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.521 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.521 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.521 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.521 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.779 
20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.779 20:38:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.779 nvme0n1 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.779 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:51.038 20:38:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.038 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.297 nvme0n1 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.297 20:38:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.297 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.579 nvme0n1 00:32:51.579 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.579 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.579 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.579 20:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.579 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.579 20:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.579 20:38:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.579 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.837 nvme0n1 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.837 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
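Each connect_authenticate pass in the trace reduces to four RPC calls: configure the allowed digests/dhgroups, attach a controller with the key under test, confirm that a controller named nvme0 came up, and detach it again. A sketch of that host-side sequence as it appears at host/auth.sh@58-65, assuming rpc_cmd wraps SPDK's scripts/rpc.py and reusing the address, port, and NQNs printed in the log:

# Host-side sequence reconstructed from the xtrace output above.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Only pass a controller key when one exists for this keyid (host/auth.sh@58).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The authenticated connect counts as good if a controller named nvme0 shows up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

In the real script the target address is not hard-coded: get_main_ns_ip (visible in the trace) selects NVMF_INITIATOR_IP for TCP transports and NVMF_FIRST_TARGET_IP for RDMA; 10.0.0.1 is used here only because that is the value the helper echoes above.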
00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.095 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.096 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.354 nvme0n1 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.354 20:38:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.354 20:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 nvme0n1 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:52.613 20:38:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.613 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.178 nvme0n1 00:32:53.178 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.178 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.178 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.178 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.178 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.178 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.178 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.179 
20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.179 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.437 20:38:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.437 20:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.004 nvme0n1 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.004 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.005 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 nvme0n1 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.573 
20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.573 20:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.143 nvme0n1 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.143 20:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.710 nvme0n1 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.710 20:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.663 nvme0n1 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.663 20:38:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.663 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.664 20:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.603 nvme0n1 00:32:57.603 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.603 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.603 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.603 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.603 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.603 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.862 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.862 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.862 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.862 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.862 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.862 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.862 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.863 20:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.832 nvme0n1 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.832 
20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.832 20:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 nvme0n1 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.769 
20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.769 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 nvme0n1 00:33:00.705 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.705 20:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.705 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 20:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 nvme0n1 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:00.964 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
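
The trace above and below is one iteration of the test's digest/dhgroup/keyid sweep: the target-side DH-HMAC-CHAP secret is installed with nvmet_auth_set_key, the host is pinned to a single digest and DH group via bdev_nvme_set_options, an authenticated controller is attached over NVMe/TCP, its presence is checked with bdev_nvme_get_controllers, and it is detached before the next combination. A minimal standalone sketch of the host-side RPC sequence for the sha384/ffdhe2048/keyid=1 case follows; it uses only the RPCs visible in the trace, but the ./scripts/rpc.py path and the keyring names key1/ckey1 are assumptions here — the real run registers those keys and wraps rpc.py in the rpc_cmd helper earlier in the log.

  #!/usr/bin/env bash
  # Sketch only: replays the host-side half of connect_authenticate sha384 ffdhe2048 1.
  # Assumes a running SPDK app, ./scripts/rpc.py as the RPC client, and DH-HMAC-CHAP
  # keys named key1/ckey1 registered beforehand (done earlier in the real test run).
  rpc=./scripts/rpc.py

  # Restrict the initiator to the digest / DH group pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Attach over NVMe/TCP, authenticating both directions (host key + controller key).
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Authentication succeeded if a controller named nvme0 is now listed.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

  # Detach so the next digest/dhgroup/keyid combination starts from a clean state.
  $rpc bdev_nvme_detach_controller nvme0

The nvme0n1 lines interleaved in the trace appear to be the namespace bdev reported after each successful authenticated attach, which is what the test observes before detaching and moving on.
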
00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.965 nvme0n1 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.965 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.225 nvme0n1 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.225 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.484 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.484 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.484 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.484 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.484 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.485 nvme0n1 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.485 20:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.743 nvme0n1 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
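The nvmet_auth_set_key echoes earlier in this iteration (host/auth.sh@48 through @51: the 'hmac(sha384)' digest string, the ffdhe3072 group, and the two DHHC-1 secrets) appear to program the kernel nvmet target side for keyid 0. A hedged sketch of what those writes likely correspond to; only the echoed values come from the trace, while the configfs paths and attribute names are assumptions about the Linux nvmet target and are not visible in this excerpt:

# Target-side counterpart of the nvmet_auth_set_key echoes above (sketch).
# Configfs locations below are assumed, not shown in the log.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

key='DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb:'    # keys[0] as echoed in the trace
ckey='DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=:'  # ckeys[0]

echo 'hmac(sha384)' > "$host_dir/dhchap_hash"       # digest under test
echo ffdhe3072      > "$host_dir/dhchap_dhgroup"    # DH group under test
echo "$key"         > "$host_dir/dhchap_key"        # host secret
echo "$ckey"        > "$host_dir/dhchap_ctrl_key"   # controller (bidirectional) secret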
00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.743 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.001 nvme0n1 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
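The host/auth.sh@101 and @102 markers in the trace show the sweep structure driving this part of the run: an outer loop over DH groups and an inner loop over the key indices, each iteration programming the target (@103) and then running one connect/verify/detach cycle on the host (@104). A sketch of that shape, with the array contents limited to what this excerpt shows (the full script also exercises other digests and groups):

# Shape of the sha384 sweep visible in this portion of the log (sketch)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
for dhgroup in "${dhgroups[@]}"; do              # host/auth.sh@101
    for keyid in "${!keys[@]}"; do               # host/auth.sh@102; keys[0..4] set up earlier
        nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"    # program the target side
        connect_authenticate sha384 "$dhgroup" "$keyid"    # attach, verify, detach on the host
    done
done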
00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.001 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.002 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.002 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.002 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.002 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.002 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.002 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.260 nvme0n1 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.260 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.519 nvme0n1 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.519 20:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.777 nvme0n1 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.777 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.037 nvme0n1 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.037 20:38:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.037 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.038 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.038 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.038 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.038 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.295 nvme0n1 00:33:03.295 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.295 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.295 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.295 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.295 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.295 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.555 20:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.814 nvme0n1 00:33:03.814 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.814 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.814 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.814 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.814 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.814 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.814 20:38:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.815 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.074 nvme0n1 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:04.074 20:38:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.074 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.075 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.075 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.075 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.333 nvme0n1 00:33:04.333 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.333 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.333 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.333 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.333 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.333 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:04.592 20:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.853 nvme0n1 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.853 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.854 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.420 nvme0n1 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.420 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.421 20:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.989 nvme0n1 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.989 20:38:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.989 20:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.553 nvme0n1 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.553 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.116 nvme0n1 00:33:07.116 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.116 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.116 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.116 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.116 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.116 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.374 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:07.374 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.374 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.374 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.374 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.374 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.374 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.375 20:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.945 nvme0n1 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.945 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.946 20:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.884 nvme0n1 00:33:08.884 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:08.885 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.907 20:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.847 nvme0n1 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:09.847 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.848 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.108 20:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.044 nvme0n1 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:11.044 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.045 20:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.981 nvme0n1 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.981 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.982 20:38:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.982 20:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.970 nvme0n1 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.970 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.229 nvme0n1 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.229 20:38:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.229 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.487 nvme0n1 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.487 20:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.748 nvme0n1 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.748 20:38:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.748 20:38:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.748 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.008 nvme0n1 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.008 nvme0n1 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.008 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.268 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.527 nvme0n1 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.527 
20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.527 20:38:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.527 20:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.528 20:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.528 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.528 20:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.787 nvme0n1 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.787 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.047 nvme0n1 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.047 20:38:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
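On the initiator side, each connect_authenticate iteration in this log reduces to two RPCs: restrict the allowed digests and DH groups, then attach with the DH-CHAP key names. The snippet below replays the same commands driven through rpc_cmd above for the sha512/ffdhe3072/keyid=3 combination; scripts/rpc.py is assumed to target the running SPDK app, and key3/ckey3 are key names registered earlier in the run, outside this excerpt.

  # Sketch only: initiator-side DH-CHAP attach, as exercised at auth.sh@60 and auth.sh@61.
  rpc=./scripts/rpc.py    # assumed path to the SPDK RPC client for the running app
  # Limit the initiator to the digest and DH group under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Attach to the kernel target listener and perform the DH-CHAP handshake.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3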
00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.047 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.306 nvme0n1 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.306 
20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.306 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.566 nvme0n1 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.566 20:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.826 nvme0n1 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.826 20:38:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.826 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.085 nvme0n1 00:33:16.085 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.085 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.085 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.085 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.085 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.085 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
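After every attach the test runs the same verification and cleanup pair (auth.sh@64 and auth.sh@65) before moving on to the next digest/dhgroup/keyid combination; the nvme0n1 lines between iterations are the namespace surfacing once authentication succeeds. A standalone sketch of that check, using the same RPCs seen above (rpc.py path assumed):

  # Sketch only: per-iteration verification and teardown.
  rpc=./scripts/rpc.py    # assumed path to the SPDK RPC client
  # The attach only counts as successful if the controller is actually listed.
  name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]] || exit 1
  # Detach so the next key/dhgroup combination starts from a clean state.
  $rpc bdev_nvme_detach_controller nvme0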
00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.345 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.605 nvme0n1 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.605 20:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.605 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.866 nvme0n1 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.866 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.432 nvme0n1 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.432 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
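[editor's note] The nvmet_auth_set_key calls traced above only show the echo side of each write, because xtrace does not print redirections. Below is a hedged reconstruction of what the function plausibly does on the kernel-target side; the configfs paths and attribute names are an assumption (the usual Linux nvmet host attributes), not something visible in this log, and the host NQN path is hypothetical.

  # Hedged reconstruction of nvmet_auth_set_key as traced at host/auth.sh@42-51.
  # Assumption: the echoes land in the Linux nvmet configfs host entry; the
  # redirection targets are not shown in the xtrace output above.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # hypothetical path

      echo "hmac(${digest})" > "${host}/dhchap_hash"       # e.g. hmac(sha512)
      echo "$dhgroup"        > "${host}/dhchap_dhgroup"    # e.g. ffdhe6144
      echo "$key"            > "${host}/dhchap_key"        # host secret (DHHC-1:..)
      [[ -z $ckey ]] || echo "$ckey" > "${host}/dhchap_ctrl_key"   # optional bidirectional secret
  }

Whatever the exact sink, the observable contract is the one the trace shows: digest, DH group, host key, and (when present) controller key are programmed per keyid before the initiator attempts to connect.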
00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.433 20:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.998 nvme0n1 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
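[editor's note] get_main_ns_ip, traced repeatedly in this log, is a transport-to-address lookup. The sketch below mirrors that logic; TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are the environment variables the trace implies (here TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1), and the early returns are a guess at what the [[ -z ... ]] guards protect.

  # Hedged sketch of get_main_ns_ip as traced at nvmf/common.sh@741-755.
  # Maps the active transport to the *name* of the variable holding the address,
  # then dereferences it: with TEST_TRANSPORT=tcp this prints $NVMF_INITIATOR_IP (10.0.0.1).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()

      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # Guards seen in the trace: a transport must be selected and known.
      [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1

      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip:-} ]] && return 1   # the named variable must actually be set
      echo "${!ip}"
  }

Its output feeds directly into the -a argument of the bdev_nvme_attach_controller calls, which is why every attach in this log targets 10.0.0.1.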
00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.998 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.567 nvme0n1 00:33:18.567 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.567 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.568 20:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.137 nvme0n1 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:19.137 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.138 20:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.707 nvme0n1 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:19.707 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.708 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.276 nvme0n1 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.276 20:38:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBlNmQxMmY1ZjAwZmE2M2M1NmRiMTUyM2UzOTAwYjQzJDAb: 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWM0ZDAzNjg2NjVmODE2ZjFiY2Q3NjBjYjA4NzUyNzU0N2MyZjEzNzdmNzUxYTA0MTU1ZGM2MWFkNTc1OTUyYi1nIJc=: 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.276 20:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.213 nvme0n1 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.213 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.214 20:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 nvme0n1 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.582 20:39:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzYjYxNDUzODM4MmViMDI4YmIyNzk2YjE3OTE0Y2VkM9/9: 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: ]] 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGI3OWJhMDlkMDk5NzUzZTljODZiNDMxOTlhZWYxMzeAeFY+: 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.582 20:39:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.583 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.583 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.583 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.583 20:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.583 20:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.583 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.583 20:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.515 nvme0n1 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWFhNzEyNDY0ZDYzNTNmZjFjYTg5ODg5ZTg4MDRkOWJlZDJiYzU1OGZjNDA5MzNk68RkuA==: 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVmZDEzZWI5NTQzMWQ0MmIyZDQ4YmEyYWVjZDg5YWY3GY2o: 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:23.516 20:39:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.516 20:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.449 nvme0n1 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDVhNjAzN2I1M2M4YjBhZmM4OTgxZjdlNzQ5NzFjNzE2YjE0YzgxOWNiOTk2NDk0MzE4YWVlNjRjZDMwOTMwYsNtzGE=: 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.449 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:24.450 20:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.382 nvme0n1 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmJmY2UxMzZjYzdlOTBmNmU4ODQ4M2JlZTY5Y2MxNWY4Yjg4OWIxMmEwNTc1ZGE1t+gTGg==: 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4NTBhMjE3MWJkN2IyY2UyMmExNDAwMTdiMTdmZGJjOTEyOTEyMjVjMTViZGVhvURIKw==: 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.382 
20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.382 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.641 request: 00:33:25.641 { 00:33:25.641 "name": "nvme0", 00:33:25.641 "trtype": "tcp", 00:33:25.641 "traddr": "10.0.0.1", 00:33:25.641 "adrfam": "ipv4", 00:33:25.641 "trsvcid": "4420", 00:33:25.641 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:25.641 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:25.641 "prchk_reftag": false, 00:33:25.641 "prchk_guard": false, 00:33:25.641 "hdgst": false, 00:33:25.641 "ddgst": false, 00:33:25.641 "method": "bdev_nvme_attach_controller", 00:33:25.641 "req_id": 1 00:33:25.641 } 00:33:25.641 Got JSON-RPC error response 00:33:25.641 response: 00:33:25.641 { 00:33:25.641 "code": -5, 00:33:25.641 "message": "Input/output error" 00:33:25.641 } 00:33:25.641 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:25.641 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:25.641 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.641 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.642 20:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.642 request: 00:33:25.642 { 00:33:25.642 "name": "nvme0", 00:33:25.642 "trtype": "tcp", 00:33:25.642 "traddr": "10.0.0.1", 00:33:25.642 "adrfam": "ipv4", 00:33:25.642 "trsvcid": "4420", 00:33:25.642 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:25.642 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:25.642 "prchk_reftag": false, 00:33:25.642 "prchk_guard": false, 00:33:25.642 "hdgst": false, 00:33:25.642 "ddgst": false, 00:33:25.642 "dhchap_key": "key2", 00:33:25.642 "method": "bdev_nvme_attach_controller", 00:33:25.642 "req_id": 1 00:33:25.642 } 00:33:25.642 Got JSON-RPC error response 00:33:25.642 response: 00:33:25.642 { 00:33:25.642 "code": -5, 00:33:25.642 "message": "Input/output error" 00:33:25.642 } 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:25.642 20:39:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.642 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.900 request: 00:33:25.900 { 00:33:25.900 "name": "nvme0", 00:33:25.900 "trtype": "tcp", 00:33:25.900 "traddr": "10.0.0.1", 00:33:25.900 "adrfam": "ipv4", 
00:33:25.900 "trsvcid": "4420", 00:33:25.900 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:25.900 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:25.900 "prchk_reftag": false, 00:33:25.900 "prchk_guard": false, 00:33:25.900 "hdgst": false, 00:33:25.900 "ddgst": false, 00:33:25.900 "dhchap_key": "key1", 00:33:25.900 "dhchap_ctrlr_key": "ckey2", 00:33:25.900 "method": "bdev_nvme_attach_controller", 00:33:25.900 "req_id": 1 00:33:25.900 } 00:33:25.900 Got JSON-RPC error response 00:33:25.900 response: 00:33:25.900 { 00:33:25.900 "code": -5, 00:33:25.900 "message": "Input/output error" 00:33:25.900 } 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:25.900 rmmod nvme_tcp 00:33:25.900 rmmod nvme_fabrics 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 4189974 ']' 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 4189974 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 4189974 ']' 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 4189974 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4189974 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4189974' 00:33:25.900 killing process with pid 4189974 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 4189974 00:33:25.900 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 4189974 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:26.161 20:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:28.079 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:28.080 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:28.080 20:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:29.455 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.455 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.455 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.455 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:29.455 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.455 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.455 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.455 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.455 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:30.390 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:30.390 20:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zfj /tmp/spdk.key-null.Tf2 /tmp/spdk.key-sha256.KPz /tmp/spdk.key-sha384.UCy /tmp/spdk.key-sha512.Qfk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:30.390 20:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:31.324 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:31.324 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:31.324 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:31.324 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:31.583 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:31.583 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:31.583 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:31.583 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:31.583 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:31.583 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:31.583 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:31.583 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:31.583 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:31.583 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:31.583 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:31.583 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:31.583 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:31.583 00:33:31.583 real 0m49.328s 00:33:31.583 user 0m47.263s 00:33:31.583 sys 0m5.489s 00:33:31.583 20:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:31.583 20:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.583 ************************************ 00:33:31.583 END TEST nvmf_auth_host 00:33:31.583 ************************************ 00:33:31.583 20:39:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:31.583 20:39:10 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:31.583 20:39:10 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.583 20:39:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:31.583 20:39:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.583 20:39:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.584 ************************************ 00:33:31.584 START TEST nvmf_digest 00:33:31.584 ************************************ 00:33:31.584 20:39:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.841 * Looking for test storage... 
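For reference, the nvmf_auth_host pass that just completed reduces to a short host-side RPC sequence. A minimal sketch, assuming rpc.py points at the running target and that the named DHCHAP keys (key3, ckey3, and so on) were registered by the test harness beforehand; addresses, NQNs and flags are taken from the trace above:

    # configure the allowed digest/DH-group combination for DH-HMAC-CHAP
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach with a host key and a controller (bidirectional) key; this is the call
    # that fails with -5 Input/output error above when the key is missing or wrong
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # confirm the controller came up, then drop it before the next key/dhgroup combo
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    rpc.py bdev_nvme_detach_controller nvme0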
00:33:31.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.841 20:39:10 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:31.842 20:39:10 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:31.842 20:39:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:33.742 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:33.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:33.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:33.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:33.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:33:33.742 00:33:33.742 --- 10.0.0.2 ping statistics --- 00:33:33.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.742 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:33.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:33:33.742 00:33:33.742 --- 10.0.0.1 ping statistics --- 00:33:33.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.742 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.742 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:33.743 20:39:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:33.743 20:39:12 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:33.743 20:39:12 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:33.743 20:39:12 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:33.743 20:39:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:33.743 20:39:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.743 20:39:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:34.001 ************************************ 00:33:34.001 START TEST nvmf_digest_clean 00:33:34.001 ************************************ 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=6393 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 6393 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 6393 ']' 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.001 20:39:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:34.001 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.001 [2024-07-15 20:39:12.325561] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:33:34.001 [2024-07-15 20:39:12.325645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.001 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.001 [2024-07-15 20:39:12.389051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.001 [2024-07-15 20:39:12.475503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.001 [2024-07-15 20:39:12.475562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.001 [2024-07-15 20:39:12.475575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.001 [2024-07-15 20:39:12.475586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.001 [2024-07-15 20:39:12.475595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
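Once the target prints its EAL and trace notices, host/digest.sh configures it over RPC before any workload starts. The exact calls live in the test script; a rough, hedged reconstruction of what the trace shows next (framework start, the null0 bdev, the TCP transport, and a listener on 10.0.0.2:4420) might be:

    # hypothetical sketch; the authoritative sequence is in test/nvmf/host/digest.sh
    rpc.py framework_start_init                      # target was launched with --wait-for-rpc
    rpc.py bdev_null_create null0 100 4096           # the "null0" namespace seen in the log (size assumed)
    rpc.py nvmf_create_transport -t tcp
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420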
00:33:34.001 [2024-07-15 20:39:12.475622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.259 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:34.259 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:34.259 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:34.259 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:34.259 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.259 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.260 null0 00:33:34.260 [2024-07-15 20:39:12.678777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.260 [2024-07-15 20:39:12.703002] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=6420 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 6420 /var/tmp/bperf.sock 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 6420 ']' 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:34.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:34.260 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.260 [2024-07-15 20:39:12.749421] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:33:34.260 [2024-07-15 20:39:12.749484] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid6420 ] 00:33:34.260 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.517 [2024-07-15 20:39:12.810403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.517 [2024-07-15 20:39:12.895853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.517 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:34.517 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:34.517 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:34.517 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:34.517 20:39:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:35.083 20:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.083 20:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.340 nvme0n1 00:33:35.340 20:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:35.340 20:39:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:35.340 Running I/O for 2 seconds... 
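The I/O phase itself is driven from a second process: bdevperf is launched with --wait-for-rpc on its own socket, a controller is attached with the TCP data digest enabled, and the timed run is kicked off through bdevperf.py. Everything below is lifted from the trace above, with the long Jenkins paths shortened:

    # initiator side of the first digest run (randread, 4 KiB, qd 128, 2 s)
    bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # start the run and wait for the latency table that follows in the log
    bdevperf.py -s /var/tmp/bperf.sock perform_tests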
00:33:37.862 00:33:37.862 Latency(us) 00:33:37.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.862 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:37.862 nvme0n1 : 2.01 18358.47 71.71 0.00 0.00 6962.31 3762.25 16311.18 00:33:37.862 =================================================================================================================== 00:33:37.862 Total : 18358.47 71.71 0.00 0.00 6962.31 3762.25 16311.18 00:33:37.862 0 00:33:37.862 20:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:37.863 20:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:37.863 20:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:37.863 20:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:37.863 20:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:37.863 | select(.opcode=="crc32c") 00:33:37.863 | "\(.module_name) \(.executed)"' 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 6420 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 6420 ']' 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 6420 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 6420 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 6420' 00:33:37.863 killing process with pid 6420 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 6420 00:33:37.863 Received shutdown signal, test time was about 2.000000 seconds 00:33:37.863 00:33:37.863 Latency(us) 00:33:37.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.863 =================================================================================================================== 00:33:37.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 6420 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=6833 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 6833 /var/tmp/bperf.sock 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 6833 ']' 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:37.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:37.863 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:38.120 [2024-07-15 20:39:16.410831] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:33:38.120 [2024-07-15 20:39:16.410929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid6833 ] 00:33:38.120 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.120 Zero copy mechanism will not be used. 
00:33:38.120 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.120 [2024-07-15 20:39:16.472299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.120 [2024-07-15 20:39:16.560529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.120 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:38.120 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:38.120 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:38.120 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:38.120 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:38.683 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.683 20:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.939 nvme0n1 00:33:38.939 20:39:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:38.939 20:39:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.939 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.939 Zero copy mechanism will not be used. 00:33:38.939 Running I/O for 2 seconds... 
00:33:41.460 00:33:41.460 Latency(us) 00:33:41.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.460 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:41.460 nvme0n1 : 2.00 2817.65 352.21 0.00 0.00 5674.00 5412.79 12815.93 00:33:41.460 =================================================================================================================== 00:33:41.460 Total : 2817.65 352.21 0.00 0.00 5674.00 5412.79 12815.93 00:33:41.460 0 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:41.460 | select(.opcode=="crc32c") 00:33:41.460 | "\(.module_name) \(.executed)"' 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 6833 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 6833 ']' 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 6833 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 6833 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 6833' 00:33:41.460 killing process with pid 6833 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 6833 00:33:41.460 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.460 00:33:41.460 Latency(us) 00:33:41.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.460 =================================================================================================================== 00:33:41.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 6833 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:41.460 20:39:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=7236 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 7236 /var/tmp/bperf.sock 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 7236 ']' 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:41.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:41.460 20:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.718 [2024-07-15 20:39:20.003598] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:33:41.718 [2024-07-15 20:39:20.003724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid7236 ] 00:33:41.718 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.718 [2024-07-15 20:39:20.065288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.718 [2024-07-15 20:39:20.153315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.718 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:41.718 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:41.718 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:41.718 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:41.718 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:42.284 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.284 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.541 nvme0n1 00:33:42.541 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:42.541 20:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.541 Running I/O for 2 seconds... 
00:33:45.101 00:33:45.101 Latency(us) 00:33:45.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.101 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.101 nvme0n1 : 2.00 20448.85 79.88 0.00 0.00 6249.68 3495.25 11505.21 00:33:45.101 =================================================================================================================== 00:33:45.101 Total : 20448.85 79.88 0.00 0.00 6249.68 3495.25 11505.21 00:33:45.101 0 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:45.101 | select(.opcode=="crc32c") 00:33:45.101 | "\(.module_name) \(.executed)"' 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 7236 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 7236 ']' 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 7236 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 7236 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 7236' 00:33:45.101 killing process with pid 7236 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 7236 00:33:45.101 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.101 00:33:45.101 Latency(us) 00:33:45.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.101 =================================================================================================================== 00:33:45.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 7236 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:45.101 20:39:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=7676 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 7676 /var/tmp/bperf.sock 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 7676 ']' 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:45.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.101 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:45.101 [2024-07-15 20:39:23.575779] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:33:45.101 [2024-07-15 20:39:23.575906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid7676 ] 00:33:45.101 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.101 Zero copy mechanism will not be used. 
00:33:45.101 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.360 [2024-07-15 20:39:23.639787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.360 [2024-07-15 20:39:23.725042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.360 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:45.360 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:45.360 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:45.360 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:45.360 20:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:45.618 20:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.618 20:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.183 nvme0n1 00:33:46.183 20:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:46.183 20:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:46.183 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:46.183 Zero copy mechanism will not be used. 00:33:46.183 Running I/O for 2 seconds... 
00:33:48.711 00:33:48.711 Latency(us) 00:33:48.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.711 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:48.711 nvme0n1 : 2.01 1845.87 230.73 0.00 0.00 8644.80 6844.87 16699.54 00:33:48.711 =================================================================================================================== 00:33:48.711 Total : 1845.87 230.73 0.00 0.00 8644.80 6844.87 16699.54 00:33:48.711 0 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:48.711 | select(.opcode=="crc32c") 00:33:48.711 | "\(.module_name) \(.executed)"' 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 7676 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 7676 ']' 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 7676 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:48.711 20:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 7676 00:33:48.711 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:48.711 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:48.711 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 7676' 00:33:48.711 killing process with pid 7676 00:33:48.711 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 7676 00:33:48.711 Received shutdown signal, test time was about 2.000000 seconds 00:33:48.711 00:33:48.711 Latency(us) 00:33:48.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.711 =================================================================================================================== 00:33:48.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:48.711 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 7676 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 6393 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 6393 ']' 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 6393 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 6393 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 6393' 00:33:48.968 killing process with pid 6393 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 6393 00:33:48.968 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 6393 00:33:49.226 00:33:49.226 real 0m15.251s 00:33:49.226 user 0m30.635s 00:33:49.226 sys 0m3.850s 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:49.226 ************************************ 00:33:49.226 END TEST nvmf_digest_clean 00:33:49.226 ************************************ 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:49.226 ************************************ 00:33:49.226 START TEST nvmf_digest_error 00:33:49.226 ************************************ 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=8196 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 8196 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 8196 ']' 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:49.226 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.226 [2024-07-15 20:39:27.629616] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:33:49.226 [2024-07-15 20:39:27.629697] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.226 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.226 [2024-07-15 20:39:27.692763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.485 [2024-07-15 20:39:27.779639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.485 [2024-07-15 20:39:27.779693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.485 [2024-07-15 20:39:27.779706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.485 [2024-07-15 20:39:27.779726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.485 [2024-07-15 20:39:27.779736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:49.485 [2024-07-15 20:39:27.779763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.485 [2024-07-15 20:39:27.868391] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.485 20:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.485 null0 00:33:49.485 [2024-07-15 20:39:27.987237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.485 [2024-07-15 20:39:28.011484] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=8225 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 8225 /var/tmp/bperf.sock 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 8225 ']' 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:49.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:49.743 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.743 [2024-07-15 20:39:28.057501] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:33:49.743 [2024-07-15 20:39:28.057593] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid8225 ] 00:33:49.743 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.743 [2024-07-15 20:39:28.123690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.743 [2024-07-15 20:39:28.222538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.001 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:50.001 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:50.001 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:50.001 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:50.258 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:50.258 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.258 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.258 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.258 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:50.258 20:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:50.516 nvme0n1 00:33:50.516 20:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:50.516 20:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.516 20:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.516 20:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.516 20:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:50.516 20:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:50.773 Running I/O for 2 seconds... 00:33:50.773 [2024-07-15 20:39:29.139369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.773 [2024-07-15 20:39:29.139422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.139445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.154185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.154238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.154257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.166765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.166800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.166820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.180575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.180622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.180643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.194892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.194927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.194961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.208185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.208235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.208253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.222067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.222099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16359 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.222115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.235796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.235828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.235845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.248515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.248550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.248568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.261241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.261275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.261294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.275101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.275133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.275150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.289218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.289253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.289272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.774 [2024-07-15 20:39:29.302604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:50.774 [2024-07-15 20:39:29.302639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.774 [2024-07-15 20:39:29.302657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.317167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.317201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:24357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.317220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.328733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.328775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.328794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.344712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.344746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.344764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.358071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.358103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.358119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.370690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.370725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.370744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.384213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.384260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.384278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.397292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.397326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.397344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.411182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.411223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.411244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.423362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.423397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.423416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.436948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.436977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.436993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.450899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.450946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.450963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.464852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.464892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.464928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.478809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.478844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.478863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.492239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.492272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.492291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.504655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 
[2024-07-15 20:39:29.504699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.504718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.518590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.518625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.518644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.532377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.532411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.532429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.546603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.546637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.546656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.032 [2024-07-15 20:39:29.558477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.032 [2024-07-15 20:39:29.558513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.032 [2024-07-15 20:39:29.558533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.573776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.573811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.573830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.588029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.588061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.588078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.600769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.600803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.600822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.616213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.616248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.616267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.628145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.628176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.628207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.644097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.644129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.644153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.656483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.656517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.656536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.670001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.670033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.670050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.685617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.685652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.685671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.698273] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.698307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.698326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.712923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.712952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.712968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.729941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.729973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.730005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.745364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.745399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.745419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.290 [2024-07-15 20:39:29.756898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.290 [2024-07-15 20:39:29.756944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.290 [2024-07-15 20:39:29.756961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.291 [2024-07-15 20:39:29.772985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.291 [2024-07-15 20:39:29.773021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.291 [2024-07-15 20:39:29.773039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.291 [2024-07-15 20:39:29.788509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.291 [2024-07-15 20:39:29.788544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.291 [2024-07-15 20:39:29.788563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:51.291 [2024-07-15 20:39:29.800464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.291 [2024-07-15 20:39:29.800498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.291 [2024-07-15 20:39:29.800517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.291 [2024-07-15 20:39:29.814924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.291 [2024-07-15 20:39:29.814955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.291 [2024-07-15 20:39:29.814972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.827747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.827781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.827800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.841076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.841120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.841136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.856884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.856932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.856949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.869049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.869080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.869097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.885659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.885693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.885713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.897391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.897426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.897445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.910828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.910863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.910889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.926957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.926989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.927007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.939933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.939963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.939980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.953424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.953458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.953477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.966168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.966214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.966233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.981850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.981892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.981938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:29.994304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:29.994339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:29.994357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:30.010064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:30.010116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:30.010169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:30.023658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:30.023710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:30.023740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:30.041807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:30.041871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:30.041913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:30.055629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:30.055673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:30.055713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.549 [2024-07-15 20:39:30.074039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.549 [2024-07-15 20:39:30.074081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.549 [2024-07-15 20:39:30.074115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.087199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.087251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.087287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.102126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.102173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.102194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.115233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.115261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.115278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.127937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.127966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.127990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.140322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.140363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.140381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.153144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.153178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.153210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.166075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.166120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.166138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.179001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.179031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 
[2024-07-15 20:39:30.179048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.190517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.190547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.190565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.204179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.204210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.204228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.215268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.215296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.215316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.229409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.229439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.229457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.240693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.240720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.240738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.255171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.255201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.255222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.266953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.266982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18405 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.267001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.279629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.279658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.279677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.292752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.292782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.807 [2024-07-15 20:39:30.292815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.807 [2024-07-15 20:39:30.304055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.807 [2024-07-15 20:39:30.304085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.808 [2024-07-15 20:39:30.304103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.808 [2024-07-15 20:39:30.320031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.808 [2024-07-15 20:39:30.320062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.808 [2024-07-15 20:39:30.320081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.808 [2024-07-15 20:39:30.333149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:51.808 [2024-07-15 20:39:30.333180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.808 [2024-07-15 20:39:30.333203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.345320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.345351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.345371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.360139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.360179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:14903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.360215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.375643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.375673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.375692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.388091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.388123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.388140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.402427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.402462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.402477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.415472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.415522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.415540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.429003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.429035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.429052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.443544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.443582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.443599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.455521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.455561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.455578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.467504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.467534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.467556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.480452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.480498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.480515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.493865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.493902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.066 [2024-07-15 20:39:30.493920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.066 [2024-07-15 20:39:30.506177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.066 [2024-07-15 20:39:30.506209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.067 [2024-07-15 20:39:30.506226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.067 [2024-07-15 20:39:30.517244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.067 [2024-07-15 20:39:30.517271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.067 [2024-07-15 20:39:30.517292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.067 [2024-07-15 20:39:30.530744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.067 [2024-07-15 20:39:30.530775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.067 [2024-07-15 20:39:30.530795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.067 [2024-07-15 20:39:30.543408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 
00:33:52.067 [2024-07-15 20:39:30.543437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.067 [2024-07-15 20:39:30.543455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.067 [2024-07-15 20:39:30.555494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.067 [2024-07-15 20:39:30.555524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.067 [2024-07-15 20:39:30.555541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.067 [2024-07-15 20:39:30.566822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.067 [2024-07-15 20:39:30.566848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.067 [2024-07-15 20:39:30.566865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.067 [2024-07-15 20:39:30.580701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.067 [2024-07-15 20:39:30.580731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.067 [2024-07-15 20:39:30.580758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.596114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.596145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.596162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.606369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.606399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.606428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.620449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.620480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.620496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.633021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.633050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.633068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.645181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.645211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.645232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.658409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.658440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.658457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.669361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.669390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.669409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.682826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.682856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.682897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.696779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.696818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.696837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.707258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.707288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.707307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.720457] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.720501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.720521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.325 [2024-07-15 20:39:30.733362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.325 [2024-07-15 20:39:30.733392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.325 [2024-07-15 20:39:30.733408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.745361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.745390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.745410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.758989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.759019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.759037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.769421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.769449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.769467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.785492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.785519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.785536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.797771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.797816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.797840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:52.326 [2024-07-15 20:39:30.809236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.809267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.809291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.825589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.825617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.825633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.837818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.837849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.837866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.326 [2024-07-15 20:39:30.849171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.326 [2024-07-15 20:39:30.849212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.326 [2024-07-15 20:39:30.849232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.862028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.584 [2024-07-15 20:39:30.862058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.584 [2024-07-15 20:39:30.862078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.874523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.584 [2024-07-15 20:39:30.874551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.584 [2024-07-15 20:39:30.874570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.887026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.584 [2024-07-15 20:39:30.887055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.584 [2024-07-15 20:39:30.887073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.898973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.584 [2024-07-15 20:39:30.899017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.584 [2024-07-15 20:39:30.899036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.912611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.584 [2024-07-15 20:39:30.912641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.584 [2024-07-15 20:39:30.912671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.925023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.584 [2024-07-15 20:39:30.925053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.584 [2024-07-15 20:39:30.925069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.935633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.584 [2024-07-15 20:39:30.935659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.584 [2024-07-15 20:39:30.935674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.584 [2024-07-15 20:39:30.950427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:30.950455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:30.950475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:30.961736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:30.961763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:30.961781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:30.976520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:30.976547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:30.976566] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:30.986972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:30.987000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:30.987019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:31.001821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:31.001866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:31.001893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:31.015802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:31.015847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:31.015866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:31.027501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:31.027535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:31.027558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:31.042525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:31.042552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:31.042569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:31.058377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:31.058408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:31.058439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.585 [2024-07-15 20:39:31.070640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0) 00:33:52.585 [2024-07-15 20:39:31.070671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.585 [2024-07-15 20:39:31.070688] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:52.585 [2024-07-15 20:39:31.082833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0)
00:33:52.585 [2024-07-15 20:39:31.082884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.585 [2024-07-15 20:39:31.082903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:52.585 [2024-07-15 20:39:31.096889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0)
00:33:52.585 [2024-07-15 20:39:31.096931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.585 [2024-07-15 20:39:31.096948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:52.585 [2024-07-15 20:39:31.108937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0)
00:33:52.585 [2024-07-15 20:39:31.108967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.585 [2024-07-15 20:39:31.108985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:52.843 [2024-07-15 20:39:31.121663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7ef9d0)
00:33:52.843 [2024-07-15 20:39:31.121708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.843 [2024-07-15 20:39:31.121727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:52.843
00:33:52.843 Latency(us)
00:33:52.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:52.843 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:52.843 nvme0n1 : 2.00 19037.04 74.36 0.00 0.00 6713.88 3276.80 20194.80
00:33:52.844 ===================================================================================================================
00:33:52.844 Total : 19037.04 74.36 0.00 0.00 6713.88 3276.80 20194.80
00:33:52.844 0
00:33:52.844 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:52.844 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:52.844 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:52.844 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:52.844 | .driver_specific
00:33:52.844 | .nvme_error
00:33:52.844 | .status_code
00:33:52.844 | .command_transient_transport_error'
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 ))
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 8225
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 8225 ']'
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 8225
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 8225
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 8225'
00:33:53.101 killing process with pid 8225
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 8225
00:33:53.101 Received shutdown signal, test time was about 2.000000 seconds
00:33:53.101
00:33:53.101 Latency(us)
00:33:53.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:53.101 ===================================================================================================================
00:33:53.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:53.101 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 8225
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=8747
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 8747 /var/tmp/bperf.sock
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 8747 ']'
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:53.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
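Before the old bdevperf instance is killed above, host/digest.sh reads back how many injected digest failures were surfaced as transient transport errors (the "(( 149 > 0 ))" check). A condensed sketch of that check, using only the RPC socket, bdev name, and jq filter that appear in the trace; this is an illustrative replay, not part of the harness itself:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# bdev_get_iostat reports per-bdev statistics; the NVMe error counters are
# exposed because the script enables --nvme-error-stat when it configures
# each bdevperf instance.
errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The test only passes if at least one COMMAND TRANSIENT TRANSPORT ERROR was
# counted; this run reported 149 of them.
(( errcount > 0 ))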
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:53.359 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:53.359 [2024-07-15 20:39:31.684142] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization...
00:33:53.359 [2024-07-15 20:39:31.684236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid8747 ]
00:33:53.359 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:53.359 Zero copy mechanism will not be used.
00:33:53.359 EAL: No free 2048 kB hugepages reported on node 1
00:33:53.359 [2024-07-15 20:39:31.745581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:53.359 [2024-07-15 20:39:31.833546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:53.616 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:53.616 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:53.616 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:53.617 20:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:53.874 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:53.874 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:53.874 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:53.874 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:53.874 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:53.874 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:54.132 nvme0n1
00:33:54.132 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:54.132 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:54.132 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:54.132 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:54.132 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:54.132 20:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:54.132 I/O size of 131072 is greater than zero copy threshold (65536).
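The trace above is the setup half of the 128 KiB random-read error test: bdevperf is started in wait mode, the controller is attached with TCP data digest enabled, and CRC32C error injection is armed before the workload runs. A condensed, annotated replay of that sequence follows; the paths, socket, target address, and NQN are the ones printed in the trace, and the accel_error_inject_error calls are assumed to go to the nvmf target application's default RPC socket, since the script issues them with rpc_cmd rather than bperf_rpc:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf as the NVMe/TCP initiator: 128 KiB random reads, queue depth 16,
# 2-second run; -z makes it wait for RPC configuration before issuing I/O.
"$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# Keep NVMe error statistics and retry failed commands indefinitely, so injected
# digest errors are counted rather than failing the job outright.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side (assumed default RPC socket): keep CRC32C corruption disabled while
# the controller attaches, then attach with data digest (--ddgst) enabled.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm CRC32C corruption (type corrupt, interval argument -i 32) so the initiator
# starts seeing the data digest errors logged below, then run the workload.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests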
00:33:54.132 Zero copy mechanism will not be used. 00:33:54.132 Running I/O for 2 seconds... 00:33:54.389 [2024-07-15 20:39:32.672578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.672631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.672651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.684038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.684071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.684098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.695332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.695363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.695380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.706402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.706433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.706449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.717393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.717423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.717439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.728558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.728588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.728620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.739613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.739657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.739674] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.750787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.750817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.750834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.761902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.761933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.761952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.772952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.772982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.772999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.783931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.783968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.783985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.794932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.794965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.794982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.807292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.807324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.807341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.818838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.818891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.818910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.832009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.832043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.832061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.843602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.843633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.843651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.854638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.854684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.854700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.865666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.865696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.865713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.876723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.876753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.876769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.887697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.887728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.887760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.898718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.898747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.898763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.389 [2024-07-15 20:39:32.909818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.389 [2024-07-15 20:39:32.909847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.389 [2024-07-15 20:39:32.909887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.921001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.921033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.921051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.932025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.932055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.932072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.943187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.943232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.943250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.954199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.954230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.954247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.965224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.965254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.965287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.976166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.976210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.976234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.987086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.987115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.987132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:32.997958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:32.997989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:32.998006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:33.009020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:33.009051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:33.009068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:33.020094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:33.020124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:33.020142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:33.031102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:33.031133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:33.031150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:33.042073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:33.042103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:33.042120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:33.052974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 
00:33:54.647 [2024-07-15 20:39:33.053005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:33.053023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.647 [2024-07-15 20:39:33.063865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.647 [2024-07-15 20:39:33.063916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.647 [2024-07-15 20:39:33.063933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.075000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.075038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.075056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.086238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.086269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.086285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.097224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.097253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.097270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.108150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.108195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.108212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.119190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.119220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.119237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.130140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.130186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.130202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.141370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.141414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.141431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.152630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.152675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.152692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.163619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.163648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.163672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.648 [2024-07-15 20:39:33.174647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.648 [2024-07-15 20:39:33.174677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.648 [2024-07-15 20:39:33.174694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.185653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.185682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.185699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.196729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.196760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.196777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.207898] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.207928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.207944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.218933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.218964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.218982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.229926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.229956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.229973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.240905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.240935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.240952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.251853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.251889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.251923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.262853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.262910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.262944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.273959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.912 [2024-07-15 20:39:33.274005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.912 [2024-07-15 20:39:33.274022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:54.912 [2024-07-15 20:39:33.286015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.286060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.286076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.298352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.298385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.298403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.310424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.310457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.310475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.322895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.322940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.322957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.335117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.335147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.335163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.347491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.347525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.347543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.359734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.359767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.359786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.372086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.372115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.372131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.384676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.384709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.384727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.397060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.397089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.397106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.409363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.409395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.409414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.421685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.421718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.421737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.913 [2024-07-15 20:39:33.433953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:54.913 [2024-07-15 20:39:33.433982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.913 [2024-07-15 20:39:33.433999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.168 [2024-07-15 20:39:33.446107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.168 [2024-07-15 20:39:33.446149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.168 [2024-07-15 20:39:33.446166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.168 [2024-07-15 20:39:33.458478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.458512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.458530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.470759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.470791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.470817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.483115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.483144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.483177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.495468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.495501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.495520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.507529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.507561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.507579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.519795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.519827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.519846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.531952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.531981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:55.169 [2024-07-15 20:39:33.531998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.544097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.544140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.544158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.556527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.556561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.556580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.568934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.568963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.568979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.581119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.581167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.581185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.593386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.593419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.593437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.605620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.605653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.605672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.617830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.617862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.617890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.630123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.630152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.630168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.642367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.642403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.642422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.654497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.654533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.654552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.666751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.666786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.666806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.679159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.679205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.679225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.169 [2024-07-15 20:39:33.691341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.169 [2024-07-15 20:39:33.691374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.169 [2024-07-15 20:39:33.691393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.703366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.703400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.703419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.715645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.715680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.715700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.727785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.727818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.727837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.739968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.740011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.740027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.752276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.752308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.752327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.764622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.764655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.764675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.776947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.776976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.776993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.789035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 
00:33:55.448 [2024-07-15 20:39:33.789080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.789102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.801157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.801204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.801224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.813467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.813500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.448 [2024-07-15 20:39:33.813518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.448 [2024-07-15 20:39:33.825596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.448 [2024-07-15 20:39:33.825629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.825648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.837945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.837975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.837991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.850080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.850109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.850125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.862114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.862143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.862159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.874444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.874478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.874496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.886367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.886400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.886418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.898406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.898439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.898458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.910477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.910509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.910529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.922800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.922833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.922852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.935092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.935120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.935137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.947249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.947276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.947292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.959530] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.959562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.959581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.449 [2024-07-15 20:39:33.971644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.449 [2024-07-15 20:39:33.971678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.449 [2024-07-15 20:39:33.971697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:33.983810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:33.983842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:33.983861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:33.995988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:33.996032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:33.996058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.008172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.008200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.008234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.020345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.020378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.020397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.032498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.032530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.032549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:55.707 [2024-07-15 20:39:34.044514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.044547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.044566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.056754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.056785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.056804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.068791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.068823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.068841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.081171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.081217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.081236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.093378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.093410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.093429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.105463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.105503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.105523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.117629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.117662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.117681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.129698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.129730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.129749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.141953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.141981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.141998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.153967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.154015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.154032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.166401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.707 [2024-07-15 20:39:34.166433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.707 [2024-07-15 20:39:34.166451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.707 [2024-07-15 20:39:34.178510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.708 [2024-07-15 20:39:34.178543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.708 [2024-07-15 20:39:34.178561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.708 [2024-07-15 20:39:34.190557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.708 [2024-07-15 20:39:34.190590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.708 [2024-07-15 20:39:34.190609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.708 [2024-07-15 20:39:34.202661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.708 [2024-07-15 20:39:34.202693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.708 [2024-07-15 20:39:34.202712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.708 [2024-07-15 20:39:34.214885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.708 [2024-07-15 20:39:34.214930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.708 [2024-07-15 20:39:34.214946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.708 [2024-07-15 20:39:34.227094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.708 [2024-07-15 20:39:34.227124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.708 [2024-07-15 20:39:34.227142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.965 [2024-07-15 20:39:34.239080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.965 [2024-07-15 20:39:34.239111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.965 [2024-07-15 20:39:34.239127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.965 [2024-07-15 20:39:34.251248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.965 [2024-07-15 20:39:34.251281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.965 [2024-07-15 20:39:34.251299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.965 [2024-07-15 20:39:34.263475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.263508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.263527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.275933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.275963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.275979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.287771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.287805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:55.966 [2024-07-15 20:39:34.287823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.299979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.300009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.300026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.312150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.312197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.312222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.324362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.324394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.324413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.336466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.336497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.336516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.348508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.348539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.348557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.360526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.360559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.360577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.372632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.372665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.372684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.384704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.384736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.384755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.396870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.396925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.396941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.409132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.409159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.409190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.421295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.421332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.421352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.433402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.433433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.433452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.445557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.445589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.445608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.457667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.457698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.457717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.470148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.470193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.470211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.482383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.966 [2024-07-15 20:39:34.482418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.966 [2024-07-15 20:39:34.482437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.966 [2024-07-15 20:39:34.494504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:55.967 [2024-07-15 20:39:34.494537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.967 [2024-07-15 20:39:34.494555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.506773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.506805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.506824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.518984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.519013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.519036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.531304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.531337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.531355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.543738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 
00:33:56.224 [2024-07-15 20:39:34.543770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.543790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.555958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.555987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.556005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.568322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.568356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.568378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.580454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.580488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.580518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.592748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.592781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.592799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.605089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.605118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.605138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.617181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.617227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.617251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.629291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.629329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.629348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.641638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.641672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.641691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.224 [2024-07-15 20:39:34.653767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe803d0) 00:33:56.224 [2024-07-15 20:39:34.653800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.224 [2024-07-15 20:39:34.653819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.224 00:33:56.225 Latency(us) 00:33:56.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.225 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:56.225 nvme0n1 : 2.01 2611.71 326.46 0.00 0.00 6120.74 5291.43 12913.02 00:33:56.225 =================================================================================================================== 00:33:56.225 Total : 2611.71 326.46 0.00 0.00 6120.74 5291.43 12913.02 00:33:56.225 0 00:33:56.225 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:56.225 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:56.225 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:56.225 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:56.225 | .driver_specific 00:33:56.225 | .nvme_error 00:33:56.225 | .status_code 00:33:56.225 | .command_transient_transport_error' 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 )) 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 8747 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 8747 ']' 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 8747 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 8747 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:56.482 
20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 8747' 00:33:56.482 killing process with pid 8747 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 8747 00:33:56.482 Received shutdown signal, test time was about 2.000000 seconds 00:33:56.482 00:33:56.482 Latency(us) 00:33:56.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.482 =================================================================================================================== 00:33:56.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.482 20:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 8747 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=9152 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 9152 /var/tmp/bperf.sock 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 9152 ']' 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:56.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:56.741 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.741 [2024-07-15 20:39:35.204705] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:33:56.741 [2024-07-15 20:39:35.204791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9152 ] 00:33:56.741 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.741 [2024-07-15 20:39:35.270825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.034 [2024-07-15 20:39:35.361754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.034 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:57.034 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:57.034 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:57.034 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:57.291 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:57.292 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.292 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.292 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.292 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.292 20:39:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.857 nvme0n1 00:33:57.857 20:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:57.857 20:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.857 20:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.857 20:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.857 20:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:57.857 20:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:57.857 Running I/O for 2 seconds... 
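[Editor's note: the trace above sets up the randwrite digest-error run before the 2-second workload starts. The following is only a condensed shell sketch of the commands already visible in the log (same rpc.py path, bperf socket, controller name, and target address); the RPC variable is shorthand introduced here, it is not the digest.sh helper itself, and the final iostat/jq step mirrors the get_transient_errcount call used after each run.]

# Condensed sketch of the setup recorded above; assumes the bdevperf instance
# is already listening on /var/tmp/bperf.sock.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Track NVMe error completions and retry failed I/O indefinitely, as in the trace.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Leave crc32c injection disabled while attaching the controller with data digest (--ddgst).
$RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Switch crc32c injection to corrupt mode with the same arguments recorded above.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the 2-second workload, then read back the transient transport error count.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
$RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'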
00:33:57.857 [2024-07-15 20:39:36.299240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:57.857 [2024-07-15 20:39:36.299595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.857 [2024-07-15 20:39:36.299634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.857 [2024-07-15 20:39:36.313508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:57.857 [2024-07-15 20:39:36.313810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.857 [2024-07-15 20:39:36.313844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.857 [2024-07-15 20:39:36.327703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:57.857 [2024-07-15 20:39:36.328061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.857 [2024-07-15 20:39:36.328090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.857 [2024-07-15 20:39:36.341929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:57.857 [2024-07-15 20:39:36.342228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.857 [2024-07-15 20:39:36.342260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.857 [2024-07-15 20:39:36.356107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:57.857 [2024-07-15 20:39:36.356449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.857 [2024-07-15 20:39:36.356481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.857 [2024-07-15 20:39:36.370314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:57.857 [2024-07-15 20:39:36.370642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.857 [2024-07-15 20:39:36.370674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.857 [2024-07-15 20:39:36.384448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:57.857 [2024-07-15 20:39:36.384777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.857 [2024-07-15 20:39:36.384809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.115 [2024-07-15 20:39:36.398492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.115 [2024-07-15 20:39:36.398806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-15 20:39:36.398836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.115 [2024-07-15 20:39:36.411997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.115 [2024-07-15 20:39:36.412284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-15 20:39:36.412314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.115 [2024-07-15 20:39:36.425797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.115 [2024-07-15 20:39:36.426159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-15 20:39:36.426187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.115 [2024-07-15 20:39:36.439846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.115 [2024-07-15 20:39:36.440340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-15 20:39:36.440372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.115 [2024-07-15 20:39:36.453904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.115 [2024-07-15 20:39:36.454277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-15 20:39:36.454307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.115 [2024-07-15 20:39:36.467121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.115 [2024-07-15 20:39:36.467400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-15 20:39:36.467428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.479787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.480104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.480132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.116 [2024-07-15 20:39:36.492614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.492937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.492967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.505802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.506126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.506154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.518580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.518890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.518920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.531522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.531781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.531809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.544630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.544905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.544934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.557666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.557939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.557969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.570536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.570814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.570843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.116 [2024-07-15 20:39:36.583341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.583726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.583755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.596216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.596521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.596550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.609107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.609470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.609499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.621983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.622358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.622387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.116 [2024-07-15 20:39:36.634704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.116 [2024-07-15 20:39:36.635011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.116 [2024-07-15 20:39:36.635039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.647604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.647902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.647932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.660677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.660991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.661020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.374 [2024-07-15 20:39:36.673590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.673851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.673903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.686569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.686859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.686911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.699335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.699590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.699618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.712086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.712432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.712459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.725020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.725394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.725422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.737975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.738376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.738410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.750927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.751280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.751308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.374 [2024-07-15 20:39:36.763756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.764140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.764169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.776537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.776827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.776854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.789346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.789696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.789724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.802368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.802626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.802653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.815224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.815484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.815510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.827870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.828230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.828259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.840722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.841106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.841135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.374 [2024-07-15 20:39:36.853533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.853799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.853826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.866490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.866750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.866777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.879321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.879654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.879682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.374 [2024-07-15 20:39:36.892257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.374 [2024-07-15 20:39:36.892517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-15 20:39:36.892544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.632 [2024-07-15 20:39:36.905008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.632 [2024-07-15 20:39:36.905311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.632 [2024-07-15 20:39:36.905338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.632 [2024-07-15 20:39:36.917976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.632 [2024-07-15 20:39:36.918327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.632 [2024-07-15 20:39:36.918356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.632 [2024-07-15 20:39:36.930874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.632 [2024-07-15 20:39:36.931246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.632 [2024-07-15 20:39:36.931274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.632 [2024-07-15 20:39:36.943744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.632 [2024-07-15 20:39:36.944115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:36.944143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:36.956579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:36.956867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:36.956917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:36.969517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:36.969776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:36.969804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:36.982505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:36.982873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:36.982925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:36.995264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:36.995522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:36.995550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.008126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.008475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.008503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.020961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.021270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.021298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:58.633 [2024-07-15 20:39:37.033705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.034017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.034046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.046928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.047288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.047319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.060951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.061325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.061357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.075039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.075361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.075391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.089209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.089538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.089571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.103264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.103564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.103595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.117368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.117665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.117697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:58.633 [2024-07-15 20:39:37.131408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.131706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.131737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.145515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.145837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.145868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.633 [2024-07-15 20:39:37.159470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.633 [2024-07-15 20:39:37.159768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-15 20:39:37.159798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.173482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.173782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.173812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.187525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.187817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.187848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.201609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.201948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.201981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.215646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.215961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.215989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.891 [2024-07-15 20:39:37.229678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.230013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.230041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.243712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.244092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.244119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.257772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.258148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.258175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.271834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.272176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.272218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.285935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.286320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.286353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.300047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.300382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.300411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.314002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.314361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.314391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.891 [2024-07-15 20:39:37.327967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.328328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.328361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.891 [2024-07-15 20:39:37.342016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.891 [2024-07-15 20:39:37.342364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.891 [2024-07-15 20:39:37.342396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.892 [2024-07-15 20:39:37.356143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.892 [2024-07-15 20:39:37.356467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.892 [2024-07-15 20:39:37.356497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.892 [2024-07-15 20:39:37.370309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.892 [2024-07-15 20:39:37.370634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.892 [2024-07-15 20:39:37.370665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.892 [2024-07-15 20:39:37.384446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.892 [2024-07-15 20:39:37.384745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.892 [2024-07-15 20:39:37.384777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.892 [2024-07-15 20:39:37.398530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.892 [2024-07-15 20:39:37.398851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.892 [2024-07-15 20:39:37.398889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.892 [2024-07-15 20:39:37.412657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:58.892 [2024-07-15 20:39:37.412995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.892 [2024-07-15 20:39:37.413022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.150 [2024-07-15 20:39:37.426651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.427001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.427028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.440685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.440994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.441036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.454831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.455235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.455266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.468847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.469142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.469187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.483001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.483325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.483356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.497082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.497419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.497450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.511180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.511520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.511550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.150 [2024-07-15 20:39:37.525327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.525624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.525654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.539453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.539788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.539818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.553451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.553776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.553807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.567589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.567939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.567967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.581659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.581991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.582017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.595807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.596144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.596172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.609905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.610251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.610282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.150 [2024-07-15 20:39:37.623797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.624126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.624172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.637764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.638100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.638128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.651660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.651971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.652015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.150 [2024-07-15 20:39:37.665888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.150 [2024-07-15 20:39:37.666310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.150 [2024-07-15 20:39:37.666342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.679979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.680265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.680296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.694011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.694344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.694380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.708048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.708376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.708407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.409 [2024-07-15 20:39:37.722212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.722560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.722591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.736202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.736497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.736529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.750273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.750572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.750604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.764338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.764633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.764666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.778356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.778654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.778686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.792447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.792747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.792779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.806557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.806857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.806898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.409 [2024-07-15 20:39:37.820609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.820939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.820966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.834669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.835012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.835040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.848736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.849068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.849097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.862930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.863223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.863255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.876963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.877276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.877307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.891011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.891332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.891362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.905066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.905392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.905424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.409 [2024-07-15 20:39:37.919043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.919373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.919403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.409 [2024-07-15 20:39:37.933125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.409 [2024-07-15 20:39:37.933469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.409 [2024-07-15 20:39:37.933500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.667 [2024-07-15 20:39:37.947119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.667 [2024-07-15 20:39:37.947461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.667 [2024-07-15 20:39:37.947492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.667 [2024-07-15 20:39:37.961197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.667 [2024-07-15 20:39:37.961541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.667 [2024-07-15 20:39:37.961571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.667 [2024-07-15 20:39:37.975341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.667 [2024-07-15 20:39:37.975666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.667 [2024-07-15 20:39:37.975698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.667 [2024-07-15 20:39:37.989486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.667 [2024-07-15 20:39:37.989787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.667 [2024-07-15 20:39:37.989819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.667 [2024-07-15 20:39:38.003645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.667 [2024-07-15 20:39:38.003978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.667 [2024-07-15 20:39:38.004022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.667 [2024-07-15 20:39:38.017701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.667 [2024-07-15 20:39:38.018023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.667 [2024-07-15 20:39:38.018050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.031732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.032072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.032116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.045705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.046023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.046065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.059816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.060208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.060239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.074050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.074378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.074411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.088077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.088409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.088440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.102130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.102469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.102502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.668 [2024-07-15 20:39:38.116003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.116296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.116328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.129958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.130317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.130349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.143829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.144173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.144201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.157787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.158187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.158215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.171823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.172270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.172301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.668 [2024-07-15 20:39:38.185902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.668 [2024-07-15 20:39:38.186283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.668 [2024-07-15 20:39:38.186320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.926 [2024-07-15 20:39:38.199960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.926 [2024-07-15 20:39:38.200279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.926 [2024-07-15 20:39:38.200311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.926 [2024-07-15 20:39:38.214010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.926 [2024-07-15 20:39:38.214340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.926 [2024-07-15 20:39:38.214370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.926 [2024-07-15 20:39:38.228177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.926 [2024-07-15 20:39:38.228516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.926 [2024-07-15 20:39:38.228547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.926 [2024-07-15 20:39:38.242214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.926 [2024-07-15 20:39:38.242540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.926 [2024-07-15 20:39:38.242571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.926 [2024-07-15 20:39:38.256299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.926 [2024-07-15 20:39:38.256593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.926 [2024-07-15 20:39:38.256624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.926 [2024-07-15 20:39:38.270314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.926 [2024-07-15 20:39:38.270638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.926 [2024-07-15 20:39:38.270669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.926 [2024-07-15 20:39:38.284349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96f990) with pdu=0x2000190fdeb0 00:33:59.926 [2024-07-15 20:39:38.284682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.926 [2024-07-15 20:39:38.284713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.926 00:33:59.926 Latency(us) 00:33:59.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.926 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:59.926 nvme0n1 : 2.01 18592.72 72.63 0.00 0.00 6867.51 6043.88 15340.28 00:33:59.926 =================================================================================================================== 00:33:59.926 Total : 18592.72 72.63 0.00 0.00 6867.51 6043.88 15340.28 00:33:59.926 0 
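The summary above closes the 2-second randwrite run: bperf reports 18592.72 IOPS (72.63 MiB/s) with Fail/s and TO/s both at 0.00, while the injected digest corruption shows up only in the NVMe error statistics, which is why the script immediately reads that counter back over the bperf RPC socket instead of relying on the bperf summary. The following is a minimal sketch of that extraction, assuming only the iostat JSON shape implied by the jq filter in the trace below; the shortened scripts/rpc.py path stands in for the full workspace path seen in the log.

    # Query per-bdev I/O statistics from the running bdevperf instance and pull out
    # how many completions carried COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    # The counter is only populated because the controller is set up with
    # 'bdev_nvme_set_options --nvme-error-stat', as the next run's trace shows.
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 ))   # the check the script performs next; here it evaluates 146 > 0
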
00:33:59.926 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:59.926 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:59.926 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:59.926 | .driver_specific 00:33:59.926 | .nvme_error 00:33:59.926 | .status_code 00:33:59.926 | .command_transient_transport_error' 00:33:59.926 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 9152 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 9152 ']' 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 9152 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 9152 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 9152' 00:34:00.184 killing process with pid 9152 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 9152 00:34:00.184 Received shutdown signal, test time was about 2.000000 seconds 00:34:00.184 00:34:00.184 Latency(us) 00:34:00.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.184 =================================================================================================================== 00:34:00.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.184 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 9152 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=9564 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 9564 /var/tmp/bperf.sock 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@829 -- # '[' -z 9564 ']' 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:00.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:00.442 20:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:00.442 [2024-07-15 20:39:38.870795] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:00.442 [2024-07-15 20:39:38.870899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9564 ] 00:34:00.442 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:00.442 Zero copy mechanism will not be used. 00:34:00.442 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.442 [2024-07-15 20:39:38.932564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.700 [2024-07-15 20:39:39.023821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.700 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:00.700 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:00.700 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:00.700 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:00.958 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:00.958 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.958 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:00.958 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.958 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:00.958 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:01.523 nvme0n1 00:34:01.523 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:01.523 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.523 20:39:39 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.523 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:01.523 20:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:01.523 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:01.523 Zero copy mechanism will not be used. 00:34:01.523 Running I/O for 2 seconds... 00:34:01.523 [2024-07-15 20:39:40.010313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.523 [2024-07-15 20:39:40.010745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.523 [2024-07-15 20:39:40.010790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.523 [2024-07-15 20:39:40.026128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.524 [2024-07-15 20:39:40.026678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.524 [2024-07-15 20:39:40.026719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.524 [2024-07-15 20:39:40.042630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.524 [2024-07-15 20:39:40.043139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.524 [2024-07-15 20:39:40.043196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.781 [2024-07-15 20:39:40.059990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.781 [2024-07-15 20:39:40.060424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.781 [2024-07-15 20:39:40.060462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.781 [2024-07-15 20:39:40.077620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.781 [2024-07-15 20:39:40.078095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.781 [2024-07-15 20:39:40.078126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.781 [2024-07-15 20:39:40.096051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.096522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.096551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.113425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.113773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.113803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.130532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.131047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.131092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.148156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.148645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.148688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.164757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.165192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.165221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.180497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.180935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.180965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.198658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.199115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.199144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.216654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.217056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.217087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.234319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.234679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.234721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.253167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.253529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.253572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.269807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.270268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.270314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.286634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.287048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.287080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.782 [2024-07-15 20:39:40.304356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:01.782 [2024-07-15 20:39:40.304724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.782 [2024-07-15 20:39:40.304753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.322568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.323020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.323065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.341145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.341500] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.341533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.359114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.359488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.359517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.377091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.377469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.377512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.395071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.395525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.395552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.412540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.412954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.412983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.429366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.429743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.429786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.446180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.446557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.446585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.464268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.464484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.464512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.481449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.481799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.481829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.500435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.501004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.501034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.519518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.519786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.519815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.538350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.538620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.538649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.040 [2024-07-15 20:39:40.556435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.040 [2024-07-15 20:39:40.556788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.040 [2024-07-15 20:39:40.556817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.574005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.574389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.574435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.593054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 
[2024-07-15 20:39:40.593441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.593469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.610241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.610755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.610783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.628668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.629010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.629041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.646683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.647240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.647273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.663381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.663731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.663759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.680985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.681345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.681375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.699288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.699642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.699670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.716730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with 
pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.717203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.717251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.735311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.735672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.735715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.753305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.753669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.753714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.772747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.773198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.773243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.792956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.793470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.809252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.809635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.809679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.299 [2024-07-15 20:39:40.827124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.299 [2024-07-15 20:39:40.827486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.299 [2024-07-15 20:39:40.827531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.556 [2024-07-15 20:39:40.844037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.556 [2024-07-15 20:39:40.844430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.556 [2024-07-15 20:39:40.844474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.556 [2024-07-15 20:39:40.859678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.556 [2024-07-15 20:39:40.860068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.556 [2024-07-15 20:39:40.860098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.556 [2024-07-15 20:39:40.877532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.556 [2024-07-15 20:39:40.877898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.556 [2024-07-15 20:39:40.877929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.556 [2024-07-15 20:39:40.895628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.556 [2024-07-15 20:39:40.896047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.556 [2024-07-15 20:39:40.896090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.556 [2024-07-15 20:39:40.913551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.556 [2024-07-15 20:39:40.913978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.556 [2024-07-15 20:39:40.914022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.556 [2024-07-15 20:39:40.930801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.556 [2024-07-15 20:39:40.931159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.556 [2024-07-15 20:39:40.931189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.556 [2024-07-15 20:39:40.948493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.556 [2024-07-15 20:39:40.948891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:40.948933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.557 [2024-07-15 20:39:40.966473] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.557 [2024-07-15 20:39:40.966939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:40.966968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.557 [2024-07-15 20:39:40.984053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.557 [2024-07-15 20:39:40.984349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:40.984378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.557 [2024-07-15 20:39:41.003057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.557 [2024-07-15 20:39:41.003476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:41.003504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.557 [2024-07-15 20:39:41.021528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.557 [2024-07-15 20:39:41.021992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:41.022034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.557 [2024-07-15 20:39:41.040264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.557 [2024-07-15 20:39:41.040689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:41.040735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.557 [2024-07-15 20:39:41.058700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.557 [2024-07-15 20:39:41.059134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:41.059164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.557 [2024-07-15 20:39:41.075608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.557 [2024-07-15 20:39:41.075841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.557 [2024-07-15 20:39:41.075872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
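Every record in this stretch of the run has the same shape: tcp.c flags a CRC32C data-digest mismatch on the transport queue pair, and the matching WRITE completion is printed by nvme_qpair.c with the TRANSIENT TRANSPORT ERROR (00/22) status that the digest-error test is designed to provoke and count. The run continues in the same pattern below. For sifting a saved copy of a log like this offline, a rough tally can be pulled with grep; the sketch is illustrative only, and the log filename is an assumption rather than a file this job writes.

# Rough offline tally of digest-error records in a saved bperf log.
# "bperf.log" is a placeholder path, not an artifact of this job.
log=bperf.log

# Data-digest failures reported by the TCP transport
digests=$(grep -c 'Data digest error on tqpair' "$log")

# Completions flagged as transient transport errors (00/22)
transients=$(grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' "$log")

echo "digest failures: $digests, transient-error completions: $transients"
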
00:34:02.815 [2024-07-15 20:39:41.094096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.815 [2024-07-15 20:39:41.094506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.815 [2024-07-15 20:39:41.094549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.815 [2024-07-15 20:39:41.111505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.815 [2024-07-15 20:39:41.112005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.815 [2024-07-15 20:39:41.112053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.129788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.130160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.130191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.147465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.147839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.147868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.165910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.166296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.166342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.183329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.183707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.183754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.199601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.200010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.200039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.216785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.217181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.217213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.234482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.234970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.235001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.250039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.250577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.250604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.266650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.267048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.267091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.285552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.285950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.285980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.302099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.302550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.302577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.318722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.319166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.319208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.816 [2024-07-15 20:39:41.336832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:02.816 [2024-07-15 20:39:41.337327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.816 [2024-07-15 20:39:41.337356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.355347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.355824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.374140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.374493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.374535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.390814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.391210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.391254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.408835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.409275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.409319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.426618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.426997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.427039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.445337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.445777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.445804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.462149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.462626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.462653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.477716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.478151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.478206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.495349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.495747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.495774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.513150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.513584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.513635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.531733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.532280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.532321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.549961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.550399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.550427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.567924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.568307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 
[2024-07-15 20:39:41.568349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.585748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.586294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.586322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.074 [2024-07-15 20:39:41.602820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.074 [2024-07-15 20:39:41.603229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.074 [2024-07-15 20:39:41.603282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.620631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.621148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.621190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.639120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.639562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.639589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.657215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.657646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.657692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.674417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.674838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.674887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.692469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.692867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.692918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.710093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.710495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.710547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.728369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.728664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.728692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.747118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.747489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.747543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.765520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.765929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.765959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.783327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.332 [2024-07-15 20:39:41.783695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.332 [2024-07-15 20:39:41.783738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.332 [2024-07-15 20:39:41.802506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.333 [2024-07-15 20:39:41.802902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.333 [2024-07-15 20:39:41.802947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.333 [2024-07-15 20:39:41.820595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.333 [2024-07-15 20:39:41.821076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.333 [2024-07-15 20:39:41.821106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.333 [2024-07-15 20:39:41.839081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.333 [2024-07-15 20:39:41.839564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.333 [2024-07-15 20:39:41.839591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.333 [2024-07-15 20:39:41.858081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.333 [2024-07-15 20:39:41.858430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.333 [2024-07-15 20:39:41.858460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.873955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.874391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.874425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.890411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.890890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.890939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.904951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.905378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.905423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.922096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.922470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.922519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.940066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.940563] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.940591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.957554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.958060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.958102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.975968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.976355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.976404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.591 [2024-07-15 20:39:41.991640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x96fcd0) with pdu=0x2000190fef90 00:34:03.591 [2024-07-15 20:39:41.992087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.591 [2024-07-15 20:39:41.992116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.591 00:34:03.591 Latency(us) 00:34:03.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.591 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:03.591 nvme0n1 : 2.01 1755.88 219.49 0.00 0.00 9086.60 2791.35 20486.07 00:34:03.591 =================================================================================================================== 00:34:03.591 Total : 1755.88 219.49 0.00 0.00 9086.60 2791.35 20486.07 00:34:03.591 0 00:34:03.591 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:03.591 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:03.591 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:03.591 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:03.591 | .driver_specific 00:34:03.591 | .nvme_error 00:34:03.591 | .status_code 00:34:03.591 | .command_transient_transport_error' 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 9564 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 9564 ']' 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 9564 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 
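The (( 113 > 0 )) check above is fed by get_transient_errcount, which queries bdev_get_iostat over the bperf RPC socket and filters the result through the jq expression traced a few lines earlier. A standalone equivalent, assuming that same socket is still live and the bdev is still named nvme0n1, would look roughly like this:

# Re-derive the transient transport error count for nvme0n1.
# Paths mirror the ones traced in this run; the RPC socket still
# being up when this is run is an assumption.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error'
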
00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 9564 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 9564' 00:34:03.849 killing process with pid 9564 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 9564 00:34:03.849 Received shutdown signal, test time was about 2.000000 seconds 00:34:03.849 00:34:03.849 Latency(us) 00:34:03.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.849 =================================================================================================================== 00:34:03.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:03.849 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 9564 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 8196 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 8196 ']' 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 8196 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 8196 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 8196' 00:34:04.107 killing process with pid 8196 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 8196 00:34:04.107 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 8196 00:34:04.365 00:34:04.365 real 0m15.200s 00:34:04.365 user 0m30.555s 00:34:04.365 sys 0m3.809s 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:04.365 ************************************ 00:34:04.365 END TEST nvmf_digest_error 00:34:04.365 ************************************ 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:04.365 rmmod nvme_tcp 00:34:04.365 rmmod nvme_fabrics 00:34:04.365 rmmod nvme_keyring 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 8196 ']' 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 8196 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 8196 ']' 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 8196 00:34:04.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (8196) - No such process 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 8196 is not found' 00:34:04.365 Process with pid 8196 is not found 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:04.365 20:39:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.900 20:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:06.900 00:34:06.900 real 0m34.806s 00:34:06.900 user 1m2.043s 00:34:06.900 sys 0m9.134s 00:34:06.900 20:39:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.900 20:39:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:06.900 ************************************ 00:34:06.900 END TEST nvmf_digest 00:34:06.900 ************************************ 00:34:06.900 20:39:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:06.900 20:39:44 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:06.900 20:39:44 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:06.900 20:39:44 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:06.900 20:39:44 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:06.900 20:39:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:06.900 20:39:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.900 20:39:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.900 ************************************ 00:34:06.900 START TEST nvmf_bdevperf 00:34:06.900 ************************************ 00:34:06.900 20:39:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:06.900 * Looking for test storage... 00:34:06.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:06.900 20:39:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:08.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:08.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:08.801 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:08.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:08.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.802 20:39:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:08.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:34:08.802 00:34:08.802 --- 10.0.0.2 ping statistics --- 00:34:08.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.802 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:08.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:34:08.802 00:34:08.802 --- 10.0.0.1 ping statistics --- 00:34:08.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.802 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=11909 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 11909 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 11909 ']' 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:08.802 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.802 [2024-07-15 20:39:47.180637] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:08.802 [2024-07-15 20:39:47.180709] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.802 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.802 [2024-07-15 20:39:47.246131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:09.061 [2024-07-15 20:39:47.337658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:09.061 [2024-07-15 20:39:47.337711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.061 [2024-07-15 20:39:47.337735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.061 [2024-07-15 20:39:47.337748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.061 [2024-07-15 20:39:47.337760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:09.061 [2024-07-15 20:39:47.337854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:09.061 [2024-07-15 20:39:47.337971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:09.061 [2024-07-15 20:39:47.337975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.061 [2024-07-15 20:39:47.471982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.061 Malloc0 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.061 [2024-07-15 20:39:47.534497] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:09.061 { 00:34:09.061 "params": { 00:34:09.061 "name": "Nvme$subsystem", 00:34:09.061 "trtype": "$TEST_TRANSPORT", 00:34:09.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:09.061 "adrfam": "ipv4", 00:34:09.061 "trsvcid": "$NVMF_PORT", 00:34:09.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:09.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:09.061 "hdgst": ${hdgst:-false}, 00:34:09.061 "ddgst": ${ddgst:-false} 00:34:09.061 }, 00:34:09.061 "method": "bdev_nvme_attach_controller" 00:34:09.061 } 00:34:09.061 EOF 00:34:09.061 )") 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:09.061 20:39:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:09.061 "params": { 00:34:09.061 "name": "Nvme1", 00:34:09.061 "trtype": "tcp", 00:34:09.061 "traddr": "10.0.0.2", 00:34:09.061 "adrfam": "ipv4", 00:34:09.061 "trsvcid": "4420", 00:34:09.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:09.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:09.061 "hdgst": false, 00:34:09.061 "ddgst": false 00:34:09.061 }, 00:34:09.061 "method": "bdev_nvme_attach_controller" 00:34:09.061 }' 00:34:09.061 [2024-07-15 20:39:47.582324] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:09.061 [2024-07-15 20:39:47.582393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12052 ] 00:34:09.319 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.319 [2024-07-15 20:39:47.641983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.319 [2024-07-15 20:39:47.731273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.576 Running I/O for 1 seconds... 
00:34:10.508 00:34:10.508 Latency(us) 00:34:10.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.508 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:10.508 Verification LBA range: start 0x0 length 0x4000 00:34:10.508 Nvme1n1 : 1.01 8555.18 33.42 0.00 0.00 14903.02 1820.44 14854.83 00:34:10.508 =================================================================================================================== 00:34:10.508 Total : 8555.18 33.42 0.00 0.00 14903.02 1820.44 14854.83 00:34:10.765 20:39:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=12195 00:34:10.765 20:39:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.766 { 00:34:10.766 "params": { 00:34:10.766 "name": "Nvme$subsystem", 00:34:10.766 "trtype": "$TEST_TRANSPORT", 00:34:10.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.766 "adrfam": "ipv4", 00:34:10.766 "trsvcid": "$NVMF_PORT", 00:34:10.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.766 "hdgst": ${hdgst:-false}, 00:34:10.766 "ddgst": ${ddgst:-false} 00:34:10.766 }, 00:34:10.766 "method": "bdev_nvme_attach_controller" 00:34:10.766 } 00:34:10.766 EOF 00:34:10.766 )") 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:10.766 20:39:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:10.766 "params": { 00:34:10.766 "name": "Nvme1", 00:34:10.766 "trtype": "tcp", 00:34:10.766 "traddr": "10.0.0.2", 00:34:10.766 "adrfam": "ipv4", 00:34:10.766 "trsvcid": "4420", 00:34:10.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:10.766 "hdgst": false, 00:34:10.766 "ddgst": false 00:34:10.766 }, 00:34:10.766 "method": "bdev_nvme_attach_controller" 00:34:10.766 }' 00:34:10.766 [2024-07-15 20:39:49.276327] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:10.766 [2024-07-15 20:39:49.276417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12195 ] 00:34:11.024 EAL: No free 2048 kB hugepages reported on node 1 00:34:11.024 [2024-07-15 20:39:49.335785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.024 [2024-07-15 20:39:49.423476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.288 Running I/O for 15 seconds... 
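For reference, the bdevperf run traced above feeds a generated target JSON to the example app over a process-substitution fd (--json /dev/fd/63). A rough standalone sketch of the same invocation is shown below, assuming the checkout path and the 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 listener that appear in the trace; the outer "subsystems"/"bdev" wrapper is an assumption, since the trace only prints the bdev_nvme_attach_controller fragment, and the temp-file name is illustrative only.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed: same checkout as the job above
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 128-deep 4 KiB verify workload for 15 seconds, matching the traced -q/-o/-w/-t/-f options
"$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f

The kill -9 of the target pid that follows is what produces the long run of ABORTED - SQ DELETION completions below: bdevperf keeps reporting its in-flight READs as aborted once the controller's submission queues are torn down.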
00:34:13.812 20:39:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 11909 00:34:13.812 20:39:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:13.812 [2024-07-15 20:39:52.244146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.244983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.244999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:13.812 [2024-07-15 20:39:52.245944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.245972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.245986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.812 [2024-07-15 20:39:52.246262] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.812 [2024-07-15 20:39:52.246277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.246974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.246989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.247006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.813 [2024-07-15 20:39:52.247036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40496 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 
[2024-07-15 20:39:52.247584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.813 [2024-07-15 20:39:52.247944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.813 [2024-07-15 20:39:52.247959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.814 [2024-07-15 20:39:52.247972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.247987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.814 [2024-07-15 20:39:52.248000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.814 [2024-07-15 20:39:52.248451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061f0 is same with the state(5) to be set 00:34:13.814 [2024-07-15 20:39:52.248485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:13.814 [2024-07-15 20:39:52.248497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:13.814 [2024-07-15 20:39:52.248511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40440 len:8 PRP1 0x0 PRP2 0x0 00:34:13.814 [2024-07-15 20:39:52.248524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248586] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11061f0 was disconnected and freed. reset controller. 
00:34:13.814 [2024-07-15 20:39:52.248660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.814 [2024-07-15 20:39:52.248684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.814 [2024-07-15 20:39:52.248716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.814 [2024-07-15 20:39:52.248746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.814 [2024-07-15 20:39:52.248781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.814 [2024-07-15 20:39:52.248797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.814 [2024-07-15 20:39:52.252629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.814 [2024-07-15 20:39:52.252680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.814 [2024-07-15 20:39:52.253421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-15 20:39:52.253464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:13.814 [2024-07-15 20:39:52.253483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.814 [2024-07-15 20:39:52.253722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.814 [2024-07-15 20:39:52.253987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.814 [2024-07-15 20:39:52.254010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.814 [2024-07-15 20:39:52.254027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.814 [2024-07-15 20:39:52.257609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.814 [2024-07-15 20:39:52.266903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.814 [2024-07-15 20:39:52.267339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-15 20:39:52.267372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:13.814 [2024-07-15 20:39:52.267390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.814 [2024-07-15 20:39:52.267628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.814 [2024-07-15 20:39:52.267871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.814 [2024-07-15 20:39:52.267907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.814 [2024-07-15 20:39:52.267923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.814 [2024-07-15 20:39:52.271484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.814 [2024-07-15 20:39:52.280857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.814 [2024-07-15 20:39:52.281324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-15 20:39:52.281367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:13.814 [2024-07-15 20:39:52.281384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.814 [2024-07-15 20:39:52.281623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.814 [2024-07-15 20:39:52.281866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.814 [2024-07-15 20:39:52.281901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.814 [2024-07-15 20:39:52.281917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.814 [2024-07-15 20:39:52.285496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.814 [2024-07-15 20:39:52.294789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.814 [2024-07-15 20:39:52.295269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-15 20:39:52.295300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:13.814 [2024-07-15 20:39:52.295318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.814 [2024-07-15 20:39:52.295556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.814 [2024-07-15 20:39:52.295799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.814 [2024-07-15 20:39:52.295823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.814 [2024-07-15 20:39:52.295838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.814 [2024-07-15 20:39:52.299420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.814 [2024-07-15 20:39:52.308701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.814 [2024-07-15 20:39:52.309158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-15 20:39:52.309189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:13.814 [2024-07-15 20:39:52.309207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.814 [2024-07-15 20:39:52.309445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.814 [2024-07-15 20:39:52.309687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.814 [2024-07-15 20:39:52.309711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.814 [2024-07-15 20:39:52.309726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.814 [2024-07-15 20:39:52.313310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.814 [2024-07-15 20:39:52.322591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.814 [2024-07-15 20:39:52.323038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.815 [2024-07-15 20:39:52.323069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:13.815 [2024-07-15 20:39:52.323087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.815 [2024-07-15 20:39:52.323324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.815 [2024-07-15 20:39:52.323567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.815 [2024-07-15 20:39:52.323590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.815 [2024-07-15 20:39:52.323606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.815 [2024-07-15 20:39:52.327187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.815 [2024-07-15 20:39:52.336469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.815 [2024-07-15 20:39:52.336926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.815 [2024-07-15 20:39:52.336959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:13.815 [2024-07-15 20:39:52.336976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:13.815 [2024-07-15 20:39:52.337223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:13.815 [2024-07-15 20:39:52.337467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.815 [2024-07-15 20:39:52.337491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.815 [2024-07-15 20:39:52.337506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.073 [2024-07-15 20:39:52.341091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.073 [2024-07-15 20:39:52.350379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.073 [2024-07-15 20:39:52.350833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.073 [2024-07-15 20:39:52.350864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.073 [2024-07-15 20:39:52.350890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.073 [2024-07-15 20:39:52.351130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.073 [2024-07-15 20:39:52.351372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.073 [2024-07-15 20:39:52.351396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.073 [2024-07-15 20:39:52.351412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.354991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.074 [2024-07-15 20:39:52.364290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.364749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.364776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.364791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.365051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.365295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.365318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.365334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.368918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.074 [2024-07-15 20:39:52.378205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.378721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.378748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.378763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.379031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.379275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.379299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.379320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.382900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.074 [2024-07-15 20:39:52.392192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.392645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.392676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.392694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.392943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.393187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.393212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.393227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.396800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.074 [2024-07-15 20:39:52.406087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.406543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.406574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.406592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.406830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.407083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.407109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.407124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.410700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.074 [2024-07-15 20:39:52.419995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.420451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.420481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.420499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.420736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.420991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.421016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.421031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.424603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.074 [2024-07-15 20:39:52.433891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.434360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.434391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.434409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.434647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.434901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.434925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.434941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.438512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.074 [2024-07-15 20:39:52.447796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.448253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.448284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.448302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.448540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.448783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.448806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.448821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.452408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.074 [2024-07-15 20:39:52.461699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.462236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.462286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.462304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.462542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.462785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.462809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.462825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.466415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.074 [2024-07-15 20:39:52.475701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.476151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.476177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.476197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.476436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.476679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.476703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.476719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.480317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.074 [2024-07-15 20:39:52.489645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.490111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.490144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.490162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.490401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.490644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.490668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.074 [2024-07-15 20:39:52.490684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.074 [2024-07-15 20:39:52.494242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.074 [2024-07-15 20:39:52.503544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.074 [2024-07-15 20:39:52.504002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.074 [2024-07-15 20:39:52.504034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.074 [2024-07-15 20:39:52.504052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.074 [2024-07-15 20:39:52.504290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.074 [2024-07-15 20:39:52.504533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.074 [2024-07-15 20:39:52.504557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.504573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.075 [2024-07-15 20:39:52.508165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.075 [2024-07-15 20:39:52.517461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.075 [2024-07-15 20:39:52.517921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.075 [2024-07-15 20:39:52.517953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.075 [2024-07-15 20:39:52.517971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.075 [2024-07-15 20:39:52.518209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.075 [2024-07-15 20:39:52.518452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.075 [2024-07-15 20:39:52.518487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.518508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.075 [2024-07-15 20:39:52.522105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.075 [2024-07-15 20:39:52.531408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.075 [2024-07-15 20:39:52.531857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.075 [2024-07-15 20:39:52.531904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.075 [2024-07-15 20:39:52.531923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.075 [2024-07-15 20:39:52.532161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.075 [2024-07-15 20:39:52.532404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.075 [2024-07-15 20:39:52.532428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.532443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.075 [2024-07-15 20:39:52.536024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.075 [2024-07-15 20:39:52.545315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.075 [2024-07-15 20:39:52.545769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.075 [2024-07-15 20:39:52.545800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.075 [2024-07-15 20:39:52.545817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.075 [2024-07-15 20:39:52.546064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.075 [2024-07-15 20:39:52.546308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.075 [2024-07-15 20:39:52.546332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.546348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.075 [2024-07-15 20:39:52.549931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.075 [2024-07-15 20:39:52.559217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.075 [2024-07-15 20:39:52.559681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.075 [2024-07-15 20:39:52.559708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.075 [2024-07-15 20:39:52.559731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.075 [2024-07-15 20:39:52.560000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.075 [2024-07-15 20:39:52.560244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.075 [2024-07-15 20:39:52.560268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.560284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.075 [2024-07-15 20:39:52.563856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.075 [2024-07-15 20:39:52.573152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.075 [2024-07-15 20:39:52.573603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.075 [2024-07-15 20:39:52.573640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.075 [2024-07-15 20:39:52.573659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.075 [2024-07-15 20:39:52.573909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.075 [2024-07-15 20:39:52.574153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.075 [2024-07-15 20:39:52.574177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.574193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.075 [2024-07-15 20:39:52.577768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.075 [2024-07-15 20:39:52.587060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.075 [2024-07-15 20:39:52.587511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.075 [2024-07-15 20:39:52.587542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.075 [2024-07-15 20:39:52.587559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.075 [2024-07-15 20:39:52.587798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.075 [2024-07-15 20:39:52.588052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.075 [2024-07-15 20:39:52.588077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.588092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.075 [2024-07-15 20:39:52.591664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.075 [2024-07-15 20:39:52.600955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.075 [2024-07-15 20:39:52.601400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.075 [2024-07-15 20:39:52.601430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.075 [2024-07-15 20:39:52.601447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.075 [2024-07-15 20:39:52.601685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.075 [2024-07-15 20:39:52.601940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.075 [2024-07-15 20:39:52.601964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.075 [2024-07-15 20:39:52.601980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.334 [2024-07-15 20:39:52.605555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.334 [2024-07-15 20:39:52.614842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.334 [2024-07-15 20:39:52.615261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.334 [2024-07-15 20:39:52.615292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.334 [2024-07-15 20:39:52.615318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.334 [2024-07-15 20:39:52.615556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.334 [2024-07-15 20:39:52.615804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.334 [2024-07-15 20:39:52.615828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.334 [2024-07-15 20:39:52.615843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.334 [2024-07-15 20:39:52.619425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.334 [2024-07-15 20:39:52.628706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.334 [2024-07-15 20:39:52.629162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.334 [2024-07-15 20:39:52.629194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.334 [2024-07-15 20:39:52.629211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.334 [2024-07-15 20:39:52.629450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.334 [2024-07-15 20:39:52.629693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.334 [2024-07-15 20:39:52.629717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.334 [2024-07-15 20:39:52.629732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.334 [2024-07-15 20:39:52.633316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.334 [2024-07-15 20:39:52.642600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.334 [2024-07-15 20:39:52.643064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.334 [2024-07-15 20:39:52.643096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.334 [2024-07-15 20:39:52.643113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.334 [2024-07-15 20:39:52.643351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.334 [2024-07-15 20:39:52.643595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.334 [2024-07-15 20:39:52.643619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.334 [2024-07-15 20:39:52.643634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.334 [2024-07-15 20:39:52.647217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.334 [2024-07-15 20:39:52.656509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.334 [2024-07-15 20:39:52.656959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.334 [2024-07-15 20:39:52.656991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.334 [2024-07-15 20:39:52.657008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.334 [2024-07-15 20:39:52.657247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.334 [2024-07-15 20:39:52.657489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.334 [2024-07-15 20:39:52.657513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.334 [2024-07-15 20:39:52.657529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.334 [2024-07-15 20:39:52.661121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.334 [2024-07-15 20:39:52.670396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.334 [2024-07-15 20:39:52.670844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.334 [2024-07-15 20:39:52.670892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.334 [2024-07-15 20:39:52.670912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.334 [2024-07-15 20:39:52.671150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.334 [2024-07-15 20:39:52.671393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.334 [2024-07-15 20:39:52.671417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.334 [2024-07-15 20:39:52.671432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.334 [2024-07-15 20:39:52.675013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.334 [2024-07-15 20:39:52.684294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.334 [2024-07-15 20:39:52.684719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.334 [2024-07-15 20:39:52.684750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.334 [2024-07-15 20:39:52.684768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.334 [2024-07-15 20:39:52.685022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.334 [2024-07-15 20:39:52.685265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.334 [2024-07-15 20:39:52.685289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.334 [2024-07-15 20:39:52.685305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.334 [2024-07-15 20:39:52.688923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.334 [2024-07-15 20:39:52.698211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.334 [2024-07-15 20:39:52.698668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.334 [2024-07-15 20:39:52.698695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.698712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.698966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.699210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.699234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.699249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.702824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.335 [2024-07-15 20:39:52.712116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.712569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.712600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.712626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.712865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.713118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.713143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.713158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.716729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.335 [2024-07-15 20:39:52.726022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.726471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.726511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.726528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.726766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.727020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.727045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.727061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.730631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.335 [2024-07-15 20:39:52.739919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.740365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.740396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.740414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.740652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.740905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.740929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.740945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.744517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.335 [2024-07-15 20:39:52.753803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.754277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.754309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.754326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.754565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.754807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.754836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.754853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.758440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.335 [2024-07-15 20:39:52.767728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.768186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.768228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.768246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.768484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.768727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.768754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.768769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.772323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.335 [2024-07-15 20:39:52.781726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.782177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.782212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.782229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.782467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.782709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.782733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.782748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.786345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.335 [2024-07-15 20:39:52.795646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.796101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.796129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.796145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.796396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.796639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.796663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.796679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.800248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.335 [2024-07-15 20:39:52.809481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.809946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.809975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.810001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.810253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.810496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.810520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.810535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.814121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.335 [2024-07-15 20:39:52.823412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.823863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.823900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.823918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.824157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.824399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.824424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.824439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.828025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.335 [2024-07-15 20:39:52.837314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.837831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.837871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.335 [2024-07-15 20:39:52.837894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.335 [2024-07-15 20:39:52.838153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.335 [2024-07-15 20:39:52.838397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.335 [2024-07-15 20:39:52.838421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.335 [2024-07-15 20:39:52.838436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.335 [2024-07-15 20:39:52.842017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.335 [2024-07-15 20:39:52.851327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.335 [2024-07-15 20:39:52.851782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.335 [2024-07-15 20:39:52.851813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.336 [2024-07-15 20:39:52.851831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.336 [2024-07-15 20:39:52.852084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.336 [2024-07-15 20:39:52.852328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.336 [2024-07-15 20:39:52.852352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.336 [2024-07-15 20:39:52.852367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.336 [2024-07-15 20:39:52.855948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.594 [2024-07-15 20:39:52.865235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.865682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.865713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.865731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.865979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.866222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.866246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.866261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.869836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.594 [2024-07-15 20:39:52.879145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.879602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.879629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.879646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.879906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.880129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.880150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.880179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.883767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.594 [2024-07-15 20:39:52.893082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.893533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.893565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.893582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.893820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.894074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.894099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.894120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.897744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.594 [2024-07-15 20:39:52.907053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.907519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.907548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.907564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.907820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.908074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.908100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.908116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.911693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.594 [2024-07-15 20:39:52.921015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.921465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.921495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.921513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.921751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.922005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.922031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.922047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.925626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.594 [2024-07-15 20:39:52.934939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.935383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.935414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.935432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.935670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.935925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.935950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.935965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.939542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.594 [2024-07-15 20:39:52.948846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.949308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.949339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.949357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.949595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.949839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.949863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.949889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.953469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.594 [2024-07-15 20:39:52.962764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.963223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.963255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.963273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.963511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.963753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.963778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.963794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.967382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.594 [2024-07-15 20:39:52.976699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.977141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.977173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.977191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.977429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.977673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.977697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.977712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.981294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.594 [2024-07-15 20:39:52.990592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:52.991064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:52.991096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:52.991115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:52.991368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:52.991611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:52.991636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:52.991651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:52.995234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.594 [2024-07-15 20:39:53.004532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:53.004998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:53.005030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:53.005048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:53.005288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:53.005531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:53.005556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:53.005571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:53.009161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.594 [2024-07-15 20:39:53.018459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:53.018917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:53.018949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:53.018967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:53.019206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:53.019450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:53.019474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:53.019490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:53.023047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.594 [2024-07-15 20:39:53.032194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.594 [2024-07-15 20:39:53.032616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.594 [2024-07-15 20:39:53.032647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.594 [2024-07-15 20:39:53.032665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.594 [2024-07-15 20:39:53.032927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.594 [2024-07-15 20:39:53.033147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.594 [2024-07-15 20:39:53.033185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.594 [2024-07-15 20:39:53.033209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.594 [2024-07-15 20:39:53.036814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.595 [2024-07-15 20:39:53.046211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.595 [2024-07-15 20:39:53.046648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.595 [2024-07-15 20:39:53.046692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.595 [2024-07-15 20:39:53.046710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.595 [2024-07-15 20:39:53.046969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.595 [2024-07-15 20:39:53.047227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.595 [2024-07-15 20:39:53.047252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.595 [2024-07-15 20:39:53.047267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.595 [2024-07-15 20:39:53.050845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.595 [2024-07-15 20:39:53.060168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.595 [2024-07-15 20:39:53.060623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.595 [2024-07-15 20:39:53.060655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.595 [2024-07-15 20:39:53.060673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.595 [2024-07-15 20:39:53.060936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.595 [2024-07-15 20:39:53.061137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.595 [2024-07-15 20:39:53.061173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.595 [2024-07-15 20:39:53.061185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.595 [2024-07-15 20:39:53.064731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.595 [2024-07-15 20:39:53.074179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.595 [2024-07-15 20:39:53.074725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.595 [2024-07-15 20:39:53.074774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.595 [2024-07-15 20:39:53.074792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.595 [2024-07-15 20:39:53.075053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.595 [2024-07-15 20:39:53.075295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.595 [2024-07-15 20:39:53.075320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.595 [2024-07-15 20:39:53.075336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.595 [2024-07-15 20:39:53.078924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.595 [2024-07-15 20:39:53.088226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.595 [2024-07-15 20:39:53.088680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.595 [2024-07-15 20:39:53.088712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.595 [2024-07-15 20:39:53.088728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.595 [2024-07-15 20:39:53.088986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.595 [2024-07-15 20:39:53.089230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.595 [2024-07-15 20:39:53.089254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.595 [2024-07-15 20:39:53.089270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.595 [2024-07-15 20:39:53.092847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.595 [2024-07-15 20:39:53.102168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.595 [2024-07-15 20:39:53.102699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.595 [2024-07-15 20:39:53.102746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.595 [2024-07-15 20:39:53.102763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.595 [2024-07-15 20:39:53.103013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.595 [2024-07-15 20:39:53.103256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.595 [2024-07-15 20:39:53.103280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.595 [2024-07-15 20:39:53.103296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.595 [2024-07-15 20:39:53.107016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.595 [2024-07-15 20:39:53.116111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.595 [2024-07-15 20:39:53.116565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.595 [2024-07-15 20:39:53.116597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.595 [2024-07-15 20:39:53.116615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.595 [2024-07-15 20:39:53.116853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.595 [2024-07-15 20:39:53.117107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.595 [2024-07-15 20:39:53.117133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.595 [2024-07-15 20:39:53.117149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.595 [2024-07-15 20:39:53.120723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.853 [2024-07-15 20:39:53.130027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.130487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.130519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.130537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.130775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.131034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.131060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.131075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.134653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.853 [2024-07-15 20:39:53.143955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.144407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.144438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.144456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.144694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.144949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.144975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.144990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.148567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.853 [2024-07-15 20:39:53.157857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.158400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.158449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.158466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.158704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.158957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.158983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.158997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.162577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.853 [2024-07-15 20:39:53.171872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.172340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.172371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.172389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.172627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.172869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.172903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.172920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.176509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.853 [2024-07-15 20:39:53.185803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.186286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.186318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.186336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.186574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.186816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.186841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.186856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.190438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.853 [2024-07-15 20:39:53.199737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.200197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.200229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.200247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.200485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.200727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.200751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.200766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.204351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.853 [2024-07-15 20:39:53.213635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.214101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.214132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.214150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.214388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.214630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.214655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.214671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.218259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.853 [2024-07-15 20:39:53.227556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.227986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.228019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.228043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.853 [2024-07-15 20:39:53.228283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.853 [2024-07-15 20:39:53.228526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.853 [2024-07-15 20:39:53.228550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.853 [2024-07-15 20:39:53.228566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.853 [2024-07-15 20:39:53.232165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.853 [2024-07-15 20:39:53.241460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.853 [2024-07-15 20:39:53.241918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-15 20:39:53.241950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.853 [2024-07-15 20:39:53.241968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.242207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.242449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.242474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.242490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.246074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.854 [2024-07-15 20:39:53.255379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.255965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.255997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.256015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.256255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.256499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.256524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.256540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.260133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.854 [2024-07-15 20:39:53.269436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.269890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.269921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.269939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.270178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.270419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.270449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.270465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.274141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.854 [2024-07-15 20:39:53.283433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.283894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.283926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.283944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.284182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.284425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.284449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.284465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.288059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.854 [2024-07-15 20:39:53.297358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.297921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.297954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.297972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.298211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.298453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.298478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.298494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.302085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.854 [2024-07-15 20:39:53.311385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.311832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.311864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.311893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.312134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.312378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.312403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.312419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.316047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.854 [2024-07-15 20:39:53.325349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.325802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.325834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.325852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.326103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.326346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.326371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.326387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.329973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.854 [2024-07-15 20:39:53.339268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.339720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.339751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.339769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.340020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.340263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.340287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.340303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.343888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.854 [2024-07-15 20:39:53.353187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.353614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.353646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.353663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.353912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.354154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.354179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.354195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.357776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.854 [2024-07-15 20:39:53.367113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.367566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.367598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.367616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.367861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:14.854 [2024-07-15 20:39:53.368118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.854 [2024-07-15 20:39:53.368144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.854 [2024-07-15 20:39:53.368159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.854 [2024-07-15 20:39:53.371739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.854 [2024-07-15 20:39:53.381047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.854 [2024-07-15 20:39:53.381470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.854 [2024-07-15 20:39:53.381502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:14.854 [2024-07-15 20:39:53.381520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:14.854 [2024-07-15 20:39:53.381758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.111 [2024-07-15 20:39:53.382015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.111 [2024-07-15 20:39:53.382041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.111 [2024-07-15 20:39:53.382057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.111 [2024-07-15 20:39:53.385637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.111 [2024-07-15 20:39:53.394946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.111 [2024-07-15 20:39:53.395411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.111 [2024-07-15 20:39:53.395442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.111 [2024-07-15 20:39:53.395460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.111 [2024-07-15 20:39:53.395698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.111 [2024-07-15 20:39:53.395954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.111 [2024-07-15 20:39:53.395980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.111 [2024-07-15 20:39:53.395996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.111 [2024-07-15 20:39:53.399575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.111 [2024-07-15 20:39:53.408865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.111 [2024-07-15 20:39:53.409333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.111 [2024-07-15 20:39:53.409365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.111 [2024-07-15 20:39:53.409382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.409621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.409863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.409901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.409924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.413503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.422792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.423251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.423282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.423300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.423538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.423780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.423804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.423819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.427412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.436708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.437168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.437200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.437218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.437457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.437701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.437726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.437741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.441333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.450629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.451060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.451092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.451110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.451349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.451592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.451617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.451633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.455226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.464516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.464969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.465001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.465018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.465257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.465499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.465524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.465540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.469126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.478428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.478887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.478920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.478937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.479176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.479418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.479443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.479458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.483040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.492332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.492784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.492816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.492833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.493085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.493328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.493353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.493369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.496952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.506278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.506852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.506929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.506947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.507186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.507433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.507459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.507475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.511069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.520166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.520792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.520844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.520862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.521147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.521391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.521416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.521433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.525022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.534114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.534723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.534778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.534796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.535043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.535286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.535311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.535327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.538918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.548004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.548429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.548461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.548479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.548717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.548974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.548999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.549015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.552599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.561899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.562348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.562380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.562398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.562636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.562892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.562917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.562933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.566512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.575799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.576234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.576266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.576283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.576521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.576763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.576788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.576803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.580394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.589692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.590098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.590130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.590148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.590386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.590628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.590652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.590667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.594258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.603549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.604003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.604037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.604061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.604302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.604544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.604569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.604585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.608173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.112 [2024-07-15 20:39:53.617467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.617895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.617927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.617945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.618183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.618425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.618450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.618466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.622051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.112 [2024-07-15 20:39:53.631344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.112 [2024-07-15 20:39:53.631795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.112 [2024-07-15 20:39:53.631826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.112 [2024-07-15 20:39:53.631844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.112 [2024-07-15 20:39:53.632093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.112 [2024-07-15 20:39:53.632336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.112 [2024-07-15 20:39:53.632361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.112 [2024-07-15 20:39:53.632377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.112 [2024-07-15 20:39:53.635959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.369 [2024-07-15 20:39:53.645253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.369 [2024-07-15 20:39:53.645712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.369 [2024-07-15 20:39:53.645743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.369 [2024-07-15 20:39:53.645760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.369 [2024-07-15 20:39:53.646011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.369 [2024-07-15 20:39:53.646259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.369 [2024-07-15 20:39:53.646284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.369 [2024-07-15 20:39:53.646299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.369 [2024-07-15 20:39:53.649886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.369 [2024-07-15 20:39:53.659177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.369 [2024-07-15 20:39:53.659630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.369 [2024-07-15 20:39:53.659662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.369 [2024-07-15 20:39:53.659681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.369 [2024-07-15 20:39:53.659932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.369 [2024-07-15 20:39:53.660174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.660200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.660215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.663792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.370 [2024-07-15 20:39:53.673093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.673541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.673572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.673590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.673828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.674085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.674111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.674127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.677704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.370 [2024-07-15 20:39:53.687011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.687470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.687501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.687519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.687757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.688013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.688038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.688054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.691629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.370 [2024-07-15 20:39:53.700934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.701510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.701564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.701581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.701820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.702076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.702102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.702119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.705697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.370 [2024-07-15 20:39:53.714798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.715262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.715294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.715311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.715550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.715791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.715816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.715831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.719421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.370 [2024-07-15 20:39:53.728715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.729227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.729260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.729278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.729517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.729760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.729785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.729801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.733392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.370 [2024-07-15 20:39:53.742686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.743148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.743180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.743206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.743446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.743688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.743714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.743730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.747320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.370 [2024-07-15 20:39:53.756631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.757089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.757121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.757139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.757378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.757622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.757648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.757664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.761255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.370 [2024-07-15 20:39:53.770553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.770977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.771010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.370 [2024-07-15 20:39:53.771028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.370 [2024-07-15 20:39:53.771268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.370 [2024-07-15 20:39:53.771512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.370 [2024-07-15 20:39:53.771537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.370 [2024-07-15 20:39:53.771553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.370 [2024-07-15 20:39:53.775137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.370 [2024-07-15 20:39:53.784440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.370 [2024-07-15 20:39:53.784893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.370 [2024-07-15 20:39:53.784924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.784942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.785181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.785424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.785463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.785480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.789078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.371 [2024-07-15 20:39:53.798391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.798817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.798851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.798869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.799121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.799365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.799390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.799406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.802993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.371 [2024-07-15 20:39:53.812302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.812921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.812958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.812975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.813215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.813457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.813482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.813497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.817090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.371 [2024-07-15 20:39:53.826186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.826810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.826862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.826888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.827129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.827372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.827396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.827411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.831001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.371 [2024-07-15 20:39:53.840089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.840563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.840595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.840612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.840851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.841105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.841131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.841147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.844725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.371 [2024-07-15 20:39:53.854031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.854490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.854522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.854540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.854778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.855034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.855059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.855075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.858656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.371 [2024-07-15 20:39:53.867957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.868407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.868438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.868455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.868694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.868950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.868976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.868992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.872570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.371 [2024-07-15 20:39:53.881891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.882351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.882382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.882400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.882644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.882901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.882926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.882942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.371 [2024-07-15 20:39:53.886523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.371 [2024-07-15 20:39:53.895823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.371 [2024-07-15 20:39:53.896282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.371 [2024-07-15 20:39:53.896314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.371 [2024-07-15 20:39:53.896331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.371 [2024-07-15 20:39:53.896571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.371 [2024-07-15 20:39:53.896814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.371 [2024-07-15 20:39:53.896838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.371 [2024-07-15 20:39:53.896854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.629 [2024-07-15 20:39:53.900443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.629 [2024-07-15 20:39:53.909749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.629 [2024-07-15 20:39:53.910209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.629 [2024-07-15 20:39:53.910240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.629 [2024-07-15 20:39:53.910258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.629 [2024-07-15 20:39:53.910496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.629 [2024-07-15 20:39:53.910739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.629 [2024-07-15 20:39:53.910763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.629 [2024-07-15 20:39:53.910778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.629 [2024-07-15 20:39:53.914367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.629 [2024-07-15 20:39:53.923676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.629 [2024-07-15 20:39:53.924119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.629 [2024-07-15 20:39:53.924151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.629 [2024-07-15 20:39:53.924168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.629 [2024-07-15 20:39:53.924407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.629 [2024-07-15 20:39:53.924651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.629 [2024-07-15 20:39:53.924675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.629 [2024-07-15 20:39:53.924695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.629 [2024-07-15 20:39:53.928285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.629 [2024-07-15 20:39:53.937626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.629 [2024-07-15 20:39:53.938089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.629 [2024-07-15 20:39:53.938121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.629 [2024-07-15 20:39:53.938138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.629 [2024-07-15 20:39:53.938376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.629 [2024-07-15 20:39:53.938619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.629 [2024-07-15 20:39:53.938643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.629 [2024-07-15 20:39:53.938658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:53.942246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.630 [2024-07-15 20:39:53.951547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:53.952054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:53.952105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:53.952123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:53.952361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:53.952604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:53.952628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:53.952645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:53.956237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.630 [2024-07-15 20:39:53.965534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:53.965986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:53.966018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:53.966036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:53.966275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:53.966517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:53.966541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:53.966556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:53.970147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.630 [2024-07-15 20:39:53.979447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:53.979897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:53.979933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:53.979952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:53.980191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:53.980435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:53.980459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:53.980474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:53.984061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.630 [2024-07-15 20:39:53.993354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:53.993800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:53.993831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:53.993849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:53.994096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:53.994339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:53.994364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:53.994380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:53.997961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.630 [2024-07-15 20:39:54.007258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:54.007707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:54.007738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:54.007756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:54.008005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:54.008248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:54.008273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:54.008288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:54.011860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.630 [2024-07-15 20:39:54.021156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:54.021616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:54.021647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:54.021665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:54.021916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:54.022166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:54.022190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:54.022205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:54.025781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.630 [2024-07-15 20:39:54.035079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:54.035528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:54.035559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:54.035576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:54.035815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:54.036071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:54.036096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:54.036111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:54.039685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.630 [2024-07-15 20:39:54.048972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:54.049431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:54.049462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:54.049480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:54.049718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:54.049972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:54.049997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:54.050012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:54.053583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.630 [2024-07-15 20:39:54.062863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:54.063311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:54.063342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:54.063360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:54.063598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:54.063841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:54.063866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:54.063891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:54.067561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.630 [2024-07-15 20:39:54.076855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:54.077320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:54.077352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:54.077369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:54.077608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:54.077851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.630 [2024-07-15 20:39:54.077884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.630 [2024-07-15 20:39:54.077902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.630 [2024-07-15 20:39:54.081478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.630 [2024-07-15 20:39:54.090775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.630 [2024-07-15 20:39:54.091208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.630 [2024-07-15 20:39:54.091240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.630 [2024-07-15 20:39:54.091258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.630 [2024-07-15 20:39:54.091497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.630 [2024-07-15 20:39:54.091740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.631 [2024-07-15 20:39:54.091765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.631 [2024-07-15 20:39:54.091780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.631 [2024-07-15 20:39:54.095362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.631 [2024-07-15 20:39:54.104305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.631 [2024-07-15 20:39:54.104702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.631 [2024-07-15 20:39:54.104730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.631 [2024-07-15 20:39:54.104745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.631 [2024-07-15 20:39:54.105007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.631 [2024-07-15 20:39:54.105213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.631 [2024-07-15 20:39:54.105234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.631 [2024-07-15 20:39:54.105246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.631 [2024-07-15 20:39:54.108343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.631 [2024-07-15 20:39:54.118186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.631 [2024-07-15 20:39:54.118622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.631 [2024-07-15 20:39:54.118667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.631 [2024-07-15 20:39:54.118690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.631 [2024-07-15 20:39:54.118952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.631 [2024-07-15 20:39:54.119183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.631 [2024-07-15 20:39:54.119207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.631 [2024-07-15 20:39:54.119223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.631 [2024-07-15 20:39:54.122698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.631 [2024-07-15 20:39:54.132123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.631 [2024-07-15 20:39:54.132615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.631 [2024-07-15 20:39:54.132646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.631 [2024-07-15 20:39:54.132664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.631 [2024-07-15 20:39:54.132912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.631 [2024-07-15 20:39:54.133137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.631 [2024-07-15 20:39:54.133173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.631 [2024-07-15 20:39:54.133186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.631 [2024-07-15 20:39:54.136725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.631 [2024-07-15 20:39:54.146149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.631 [2024-07-15 20:39:54.146620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.631 [2024-07-15 20:39:54.146652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.631 [2024-07-15 20:39:54.146670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.631 [2024-07-15 20:39:54.146933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.631 [2024-07-15 20:39:54.147146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.631 [2024-07-15 20:39:54.147183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.631 [2024-07-15 20:39:54.147198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.631 [2024-07-15 20:39:54.150789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.890 [2024-07-15 20:39:54.159734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.890 [2024-07-15 20:39:54.160152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.890 [2024-07-15 20:39:54.160181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.890 [2024-07-15 20:39:54.160197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.890 [2024-07-15 20:39:54.160412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.890 [2024-07-15 20:39:54.160630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.890 [2024-07-15 20:39:54.160657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.890 [2024-07-15 20:39:54.160671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.890 [2024-07-15 20:39:54.163933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.890 [2024-07-15 20:39:54.173087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.890 [2024-07-15 20:39:54.173498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.890 [2024-07-15 20:39:54.173527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.890 [2024-07-15 20:39:54.173542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.173779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.174014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.174037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.174050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.177344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.891 [2024-07-15 20:39:54.187169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.187712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.187763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.187780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.188031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.188279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.188304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.188320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.191857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.891 [2024-07-15 20:39:54.201044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.201585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.201616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.201634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.201872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.202119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.202141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.202155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.205748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.891 [2024-07-15 20:39:54.214962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.215477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.215526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.215543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.215783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.216039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.216062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.216076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.219689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.891 [2024-07-15 20:39:54.228993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.229478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.229506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.229522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.229776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.230014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.230037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.230051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.233681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.891 [2024-07-15 20:39:54.242983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.243454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.243483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.243499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.243757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.244018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.244041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.244055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.247666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.891 [2024-07-15 20:39:54.257009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.257469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.257496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.257517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.257772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.258032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.258055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.258070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.261692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.891 [2024-07-15 20:39:54.270909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.271352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.271395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.271411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.271668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.271937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.271960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.271974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.275599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.891 [2024-07-15 20:39:54.284702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.285116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.285144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.285176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.285414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.285658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.285682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.285697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.289326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.891 [2024-07-15 20:39:54.298705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.299119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.299147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.299179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.299418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.299662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.299685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.299707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.303326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.891 [2024-07-15 20:39:54.312696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.313128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-07-15 20:39:54.313156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.891 [2024-07-15 20:39:54.313172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.891 [2024-07-15 20:39:54.313423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.891 [2024-07-15 20:39:54.313667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.891 [2024-07-15 20:39:54.313691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.891 [2024-07-15 20:39:54.313707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.891 [2024-07-15 20:39:54.317285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.891 [2024-07-15 20:39:54.326559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.891 [2024-07-15 20:39:54.327031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-07-15 20:39:54.327059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.892 [2024-07-15 20:39:54.327075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.892 [2024-07-15 20:39:54.327328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.892 [2024-07-15 20:39:54.327572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.892 [2024-07-15 20:39:54.327596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.892 [2024-07-15 20:39:54.327611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.892 [2024-07-15 20:39:54.331190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.892 [2024-07-15 20:39:54.340488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.892 [2024-07-15 20:39:54.340937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-07-15 20:39:54.340969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.892 [2024-07-15 20:39:54.340987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.892 [2024-07-15 20:39:54.341227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.892 [2024-07-15 20:39:54.341470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.892 [2024-07-15 20:39:54.341494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.892 [2024-07-15 20:39:54.341510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.892 [2024-07-15 20:39:54.345094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.892 [2024-07-15 20:39:54.354447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.892 [2024-07-15 20:39:54.354889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-07-15 20:39:54.354921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.892 [2024-07-15 20:39:54.354938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.892 [2024-07-15 20:39:54.355177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.892 [2024-07-15 20:39:54.355421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.892 [2024-07-15 20:39:54.355445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.892 [2024-07-15 20:39:54.355460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.892 [2024-07-15 20:39:54.359043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.892 [2024-07-15 20:39:54.368329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.892 [2024-07-15 20:39:54.368782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-07-15 20:39:54.368813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.892 [2024-07-15 20:39:54.368830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.892 [2024-07-15 20:39:54.369078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.892 [2024-07-15 20:39:54.369322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.892 [2024-07-15 20:39:54.369347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.892 [2024-07-15 20:39:54.369362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.892 [2024-07-15 20:39:54.372944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.892 [2024-07-15 20:39:54.382233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.892 [2024-07-15 20:39:54.382687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-07-15 20:39:54.382717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.892 [2024-07-15 20:39:54.382734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.892 [2024-07-15 20:39:54.382982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.892 [2024-07-15 20:39:54.383225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.892 [2024-07-15 20:39:54.383250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.892 [2024-07-15 20:39:54.383265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.892 [2024-07-15 20:39:54.386860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.892 [2024-07-15 20:39:54.396154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.892 [2024-07-15 20:39:54.396580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-07-15 20:39:54.396611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.892 [2024-07-15 20:39:54.396629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.892 [2024-07-15 20:39:54.396874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.892 [2024-07-15 20:39:54.397128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.892 [2024-07-15 20:39:54.397152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.892 [2024-07-15 20:39:54.397167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.892 [2024-07-15 20:39:54.400744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.892 [2024-07-15 20:39:54.410037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.892 [2024-07-15 20:39:54.410499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-07-15 20:39:54.410530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:15.892 [2024-07-15 20:39:54.410548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:15.892 [2024-07-15 20:39:54.410787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:15.892 [2024-07-15 20:39:54.411041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.892 [2024-07-15 20:39:54.411066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.892 [2024-07-15 20:39:54.411081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.892 [2024-07-15 20:39:54.414655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.151 [2024-07-15 20:39:54.423948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.151 [2024-07-15 20:39:54.424388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.151 [2024-07-15 20:39:54.424419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.151 [2024-07-15 20:39:54.424437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.151 [2024-07-15 20:39:54.424675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.151 [2024-07-15 20:39:54.424929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.151 [2024-07-15 20:39:54.424954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.151 [2024-07-15 20:39:54.424969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.151 [2024-07-15 20:39:54.428543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.151 [2024-07-15 20:39:54.437829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.151 [2024-07-15 20:39:54.438308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.151 [2024-07-15 20:39:54.438339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.151 [2024-07-15 20:39:54.438357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.151 [2024-07-15 20:39:54.438595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.151 [2024-07-15 20:39:54.438838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.151 [2024-07-15 20:39:54.438862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.151 [2024-07-15 20:39:54.438897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.151 [2024-07-15 20:39:54.442475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.151 [2024-07-15 20:39:54.451765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.151 [2024-07-15 20:39:54.452220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.151 [2024-07-15 20:39:54.452251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.151 [2024-07-15 20:39:54.452269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.151 [2024-07-15 20:39:54.452508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.151 [2024-07-15 20:39:54.452750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.151 [2024-07-15 20:39:54.452774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.151 [2024-07-15 20:39:54.452789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.151 [2024-07-15 20:39:54.456375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.151 [2024-07-15 20:39:54.465657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.151 [2024-07-15 20:39:54.466121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.151 [2024-07-15 20:39:54.466154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.151 [2024-07-15 20:39:54.466172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.151 [2024-07-15 20:39:54.466411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.151 [2024-07-15 20:39:54.466654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.151 [2024-07-15 20:39:54.466678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.151 [2024-07-15 20:39:54.466694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.151 [2024-07-15 20:39:54.470281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.151 [2024-07-15 20:39:54.479568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.151 [2024-07-15 20:39:54.480027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.151 [2024-07-15 20:39:54.480058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.151 [2024-07-15 20:39:54.480075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.151 [2024-07-15 20:39:54.480314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.151 [2024-07-15 20:39:54.480557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.151 [2024-07-15 20:39:54.480581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.151 [2024-07-15 20:39:54.480597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.151 [2024-07-15 20:39:54.484181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.151 [2024-07-15 20:39:54.493468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.151 [2024-07-15 20:39:54.493930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.151 [2024-07-15 20:39:54.493966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.151 [2024-07-15 20:39:54.493985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.151 [2024-07-15 20:39:54.494223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.151 [2024-07-15 20:39:54.494465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.151 [2024-07-15 20:39:54.494490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.151 [2024-07-15 20:39:54.494505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.151 [2024-07-15 20:39:54.498091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.151 [2024-07-15 20:39:54.507382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.151 [2024-07-15 20:39:54.507833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.151 [2024-07-15 20:39:54.507864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.151 [2024-07-15 20:39:54.507892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.508132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.508376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.508400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.508415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.511999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.152 [2024-07-15 20:39:54.521291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.521766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.521813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.521830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.522079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.522323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.522348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.522363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.525943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.152 [2024-07-15 20:39:54.535229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.535679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.535710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.535727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.535977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.536226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.536250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.536265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.539842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.152 [2024-07-15 20:39:54.549141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.549654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.549702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.549720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.549969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.550213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.550238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.550253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.553828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.152 [2024-07-15 20:39:54.563163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.563622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.563669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.563687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.563937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.564181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.564205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.564221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.567794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.152 [2024-07-15 20:39:54.577112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.577564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.577595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.577613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.577851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.578105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.578130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.578146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.581723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.152 [2024-07-15 20:39:54.591015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.591440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.591471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.591489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.591728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.591982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.592007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.592023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.595600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.152 [2024-07-15 20:39:54.604896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.605345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.605377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.605394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.605632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.605886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.605910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.605925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.609500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.152 [2024-07-15 20:39:54.618784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.619241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.619272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.619290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.619528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.619771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.619795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.619810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.623398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.152 [2024-07-15 20:39:54.632686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.633127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.633158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.633181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.633420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.633662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.633687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.633702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.637285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.152 [2024-07-15 20:39:54.646570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.647033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.647065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.647083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.152 [2024-07-15 20:39:54.647321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.152 [2024-07-15 20:39:54.647565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.152 [2024-07-15 20:39:54.647589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.152 [2024-07-15 20:39:54.647604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.152 [2024-07-15 20:39:54.651190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.152 [2024-07-15 20:39:54.660474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.152 [2024-07-15 20:39:54.660896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.152 [2024-07-15 20:39:54.660927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.152 [2024-07-15 20:39:54.660944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.153 [2024-07-15 20:39:54.661182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.153 [2024-07-15 20:39:54.661425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.153 [2024-07-15 20:39:54.661449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.153 [2024-07-15 20:39:54.661464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.153 [2024-07-15 20:39:54.665048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.153 [2024-07-15 20:39:54.674333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.153 [2024-07-15 20:39:54.674792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.153 [2024-07-15 20:39:54.674823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.153 [2024-07-15 20:39:54.674841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.153 [2024-07-15 20:39:54.675087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.153 [2024-07-15 20:39:54.675331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.153 [2024-07-15 20:39:54.675361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.153 [2024-07-15 20:39:54.675378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.153 [2024-07-15 20:39:54.678959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.411 [2024-07-15 20:39:54.688247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.411 [2024-07-15 20:39:54.688694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-15 20:39:54.688725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.411 [2024-07-15 20:39:54.688742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.411 [2024-07-15 20:39:54.688991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.411 [2024-07-15 20:39:54.689234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.411 [2024-07-15 20:39:54.689258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.411 [2024-07-15 20:39:54.689274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.411 [2024-07-15 20:39:54.692848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.411 [2024-07-15 20:39:54.702137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.411 [2024-07-15 20:39:54.702751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-15 20:39:54.702804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.411 [2024-07-15 20:39:54.702821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.411 [2024-07-15 20:39:54.703069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.411 [2024-07-15 20:39:54.703312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.411 [2024-07-15 20:39:54.703336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.411 [2024-07-15 20:39:54.703352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.411 [2024-07-15 20:39:54.706958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.411 [2024-07-15 20:39:54.716037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.411 [2024-07-15 20:39:54.716499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-15 20:39:54.716530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.411 [2024-07-15 20:39:54.716547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.411 [2024-07-15 20:39:54.716786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.411 [2024-07-15 20:39:54.717041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.411 [2024-07-15 20:39:54.717066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.411 [2024-07-15 20:39:54.717081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.411 [2024-07-15 20:39:54.720655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.412 [2024-07-15 20:39:54.729945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.730398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.730429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.730447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.730685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.730939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.730964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.730980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.734552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.412 [2024-07-15 20:39:54.743835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.744297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.744328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.744346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.744584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.744826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.744851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.744866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.748450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.412 [2024-07-15 20:39:54.757730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.758196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.758227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.758245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.758482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.758725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.758749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.758764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.762350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.412 [2024-07-15 20:39:54.771679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.772117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.772149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.772166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.772411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.772655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.772680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.772695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.776280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.412 [2024-07-15 20:39:54.785571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.786031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.786063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.786080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.786319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.786563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.786587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.786602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.790192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.412 [2024-07-15 20:39:54.799484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.799906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.799938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.799956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.800194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.800438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.800462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.800477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.804060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.412 [2024-07-15 20:39:54.813351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.813805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.813836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.813853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.814098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.814342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.814366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.814387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.817970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.412 [2024-07-15 20:39:54.827255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.827871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.827940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.827958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.828196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.828439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.828463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.828478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.832062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.412 [2024-07-15 20:39:54.841163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.841627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.841658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.841676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.841924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.842168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.842192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.842208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.845781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.412 [2024-07-15 20:39:54.855077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.855541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.855573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.855591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.855829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.856083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.856109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.856124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.412 [2024-07-15 20:39:54.859698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.412 [2024-07-15 20:39:54.868987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.412 [2024-07-15 20:39:54.869442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-15 20:39:54.869473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.412 [2024-07-15 20:39:54.869490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.412 [2024-07-15 20:39:54.869729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.412 [2024-07-15 20:39:54.869984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.412 [2024-07-15 20:39:54.870009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.412 [2024-07-15 20:39:54.870025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.413 [2024-07-15 20:39:54.873600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.413 [2024-07-15 20:39:54.882890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.413 [2024-07-15 20:39:54.883342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.413 [2024-07-15 20:39:54.883372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.413 [2024-07-15 20:39:54.883390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.413 [2024-07-15 20:39:54.883628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.413 [2024-07-15 20:39:54.883870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.413 [2024-07-15 20:39:54.883905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.413 [2024-07-15 20:39:54.883921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.413 [2024-07-15 20:39:54.887500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.413 [2024-07-15 20:39:54.896783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.413 [2024-07-15 20:39:54.897242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.413 [2024-07-15 20:39:54.897273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.413 [2024-07-15 20:39:54.897291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.413 [2024-07-15 20:39:54.897529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.413 [2024-07-15 20:39:54.897771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.413 [2024-07-15 20:39:54.897796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.413 [2024-07-15 20:39:54.897811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.413 [2024-07-15 20:39:54.901423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.413 [2024-07-15 20:39:54.910707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.413 [2024-07-15 20:39:54.911162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.413 [2024-07-15 20:39:54.911194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.413 [2024-07-15 20:39:54.911211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.413 [2024-07-15 20:39:54.911454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.413 [2024-07-15 20:39:54.911698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.413 [2024-07-15 20:39:54.911722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.413 [2024-07-15 20:39:54.911737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.413 [2024-07-15 20:39:54.915322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.413 [2024-07-15 20:39:54.924604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.413 [2024-07-15 20:39:54.925059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.413 [2024-07-15 20:39:54.925090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.413 [2024-07-15 20:39:54.925108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.413 [2024-07-15 20:39:54.925346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.413 [2024-07-15 20:39:54.925589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.413 [2024-07-15 20:39:54.925613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.413 [2024-07-15 20:39:54.925629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.413 [2024-07-15 20:39:54.929211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.413 [2024-07-15 20:39:54.938490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.413 [2024-07-15 20:39:54.938942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.413 [2024-07-15 20:39:54.938973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.413 [2024-07-15 20:39:54.938991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.413 [2024-07-15 20:39:54.939229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.413 [2024-07-15 20:39:54.939472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.413 [2024-07-15 20:39:54.939496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.413 [2024-07-15 20:39:54.939512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.672 [2024-07-15 20:39:54.943100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.672 [2024-07-15 20:39:54.952389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.672 [2024-07-15 20:39:54.952849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.672 [2024-07-15 20:39:54.952886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.672 [2024-07-15 20:39:54.952906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.672 [2024-07-15 20:39:54.953155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.672 [2024-07-15 20:39:54.953399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.672 [2024-07-15 20:39:54.953422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.672 [2024-07-15 20:39:54.953443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.672 [2024-07-15 20:39:54.957026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.672 [2024-07-15 20:39:54.966313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.672 [2024-07-15 20:39:54.966834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.672 [2024-07-15 20:39:54.966865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.672 [2024-07-15 20:39:54.966892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.672 [2024-07-15 20:39:54.967134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.672 [2024-07-15 20:39:54.967377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.672 [2024-07-15 20:39:54.967402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.672 [2024-07-15 20:39:54.967417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.672 [2024-07-15 20:39:54.970998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.673 [2024-07-15 20:39:54.980319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:54.980747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:54.980778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:54.980796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:54.981045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:54.981290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:54.981315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:54.981330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:54.984912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.673 [2024-07-15 20:39:54.994202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:54.994625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:54.994656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:54.994673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:54.994923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:54.995167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:54.995190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:54.995206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:54.998782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.673 [2024-07-15 20:39:55.008074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.008532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.008568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.008587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.008825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.009079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.009104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.009120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.012696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.673 [2024-07-15 20:39:55.021988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.022438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.022469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.022487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.022726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.022979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.023004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.023020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.026592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.673 [2024-07-15 20:39:55.035881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.036327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.036358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.036375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.036614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.036856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.036890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.036907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.040483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.673 [2024-07-15 20:39:55.049765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.050202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.050233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.050250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.050489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.050736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.050761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.050776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.054360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.673 [2024-07-15 20:39:55.063642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.064116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.064148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.064165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.064404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.064647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.064672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.064688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.068272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.673 [2024-07-15 20:39:55.077560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.077987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.078019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.078036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.078275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.078518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.078542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.078557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.082140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.673 [2024-07-15 20:39:55.091517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.091972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.092004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.092022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.092260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.092504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.092529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.092544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.096137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.673 [2024-07-15 20:39:55.105429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.105886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.105918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.105936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.106174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.106417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.106442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.106457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.673 [2024-07-15 20:39:55.110039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.673 [2024-07-15 20:39:55.119327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.673 [2024-07-15 20:39:55.119785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.673 [2024-07-15 20:39:55.119816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.673 [2024-07-15 20:39:55.119834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.673 [2024-07-15 20:39:55.120082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.673 [2024-07-15 20:39:55.120326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.673 [2024-07-15 20:39:55.120350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.673 [2024-07-15 20:39:55.120366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.674 [2024-07-15 20:39:55.123947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.674 [2024-07-15 20:39:55.133236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.674 [2024-07-15 20:39:55.133667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.674 [2024-07-15 20:39:55.133699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.674 [2024-07-15 20:39:55.133717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.674 [2024-07-15 20:39:55.133976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.674 [2024-07-15 20:39:55.134221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.674 [2024-07-15 20:39:55.134246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.674 [2024-07-15 20:39:55.134261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.674 [2024-07-15 20:39:55.137836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.674 [2024-07-15 20:39:55.147123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.674 [2024-07-15 20:39:55.147584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.674 [2024-07-15 20:39:55.147616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.674 [2024-07-15 20:39:55.147641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.674 [2024-07-15 20:39:55.147892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.674 [2024-07-15 20:39:55.148136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.674 [2024-07-15 20:39:55.148160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.674 [2024-07-15 20:39:55.148175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.674 [2024-07-15 20:39:55.151754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.674 [2024-07-15 20:39:55.161054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.674 [2024-07-15 20:39:55.161519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.674 [2024-07-15 20:39:55.161552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.674 [2024-07-15 20:39:55.161570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.674 [2024-07-15 20:39:55.161808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.674 [2024-07-15 20:39:55.162063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.674 [2024-07-15 20:39:55.162089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.674 [2024-07-15 20:39:55.162104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.674 [2024-07-15 20:39:55.165677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.674 [2024-07-15 20:39:55.174990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.674 [2024-07-15 20:39:55.175440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.674 [2024-07-15 20:39:55.175472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.674 [2024-07-15 20:39:55.175489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.674 [2024-07-15 20:39:55.175728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.674 [2024-07-15 20:39:55.175981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.674 [2024-07-15 20:39:55.176016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.674 [2024-07-15 20:39:55.176033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.674 [2024-07-15 20:39:55.179611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.674 [2024-07-15 20:39:55.188971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.674 [2024-07-15 20:39:55.189445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.674 [2024-07-15 20:39:55.189477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.674 [2024-07-15 20:39:55.189496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.674 [2024-07-15 20:39:55.189735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.674 [2024-07-15 20:39:55.189991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.674 [2024-07-15 20:39:55.190023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.674 [2024-07-15 20:39:55.190039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.674 [2024-07-15 20:39:55.193620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.935 [2024-07-15 20:39:55.202921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.935 [2024-07-15 20:39:55.203347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.935 [2024-07-15 20:39:55.203379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.935 [2024-07-15 20:39:55.203397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.935 [2024-07-15 20:39:55.203636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.935 [2024-07-15 20:39:55.203894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.935 [2024-07-15 20:39:55.203919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.935 [2024-07-15 20:39:55.203934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.935 [2024-07-15 20:39:55.207514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.935 [2024-07-15 20:39:55.216807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.935 [2024-07-15 20:39:55.217313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.935 [2024-07-15 20:39:55.217345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.935 [2024-07-15 20:39:55.217363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.935 [2024-07-15 20:39:55.217600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.935 [2024-07-15 20:39:55.217843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.935 [2024-07-15 20:39:55.217867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.935 [2024-07-15 20:39:55.217894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.935 [2024-07-15 20:39:55.221473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.935 [2024-07-15 20:39:55.230763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.935 [2024-07-15 20:39:55.231202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.935 [2024-07-15 20:39:55.231233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.935 [2024-07-15 20:39:55.231251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.935 [2024-07-15 20:39:55.231489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.935 [2024-07-15 20:39:55.231732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.935 [2024-07-15 20:39:55.231756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.935 [2024-07-15 20:39:55.231772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.935 [2024-07-15 20:39:55.235357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 11909 Killed "${NVMF_APP[@]}" "$@" 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=12877 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 12877 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 12877 ']' 00:34:16.935 [2024-07-15 20:39:55.244664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:16.935 [2024-07-15 20:39:55.245103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.935 [2024-07-15 20:39:55.245136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.935 [2024-07-15 20:39:55.245159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.935 [2024-07-15 20:39:55.245399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:16.935 [2024-07-15 20:39:55.245646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.935 [2024-07-15 20:39:55.245672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.935 [2024-07-15 20:39:55.245690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.935 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.935 [2024-07-15 20:39:55.249284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.935 [2024-07-15 20:39:55.258584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.935 [2024-07-15 20:39:55.259024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.935 [2024-07-15 20:39:55.259055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.935 [2024-07-15 20:39:55.259073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.935 [2024-07-15 20:39:55.259311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.935 [2024-07-15 20:39:55.259554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.935 [2024-07-15 20:39:55.259578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.935 [2024-07-15 20:39:55.259594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.935 [2024-07-15 20:39:55.263186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.935 [2024-07-15 20:39:55.272511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.935 [2024-07-15 20:39:55.272970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.935 [2024-07-15 20:39:55.273002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.935 [2024-07-15 20:39:55.273020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.273258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.273501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.273526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.273542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.277130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.936 [2024-07-15 20:39:55.286436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.286905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.286936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.286954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.287193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.287436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.287460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.287475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.291061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.936 [2024-07-15 20:39:55.293275] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:16.936 [2024-07-15 20:39:55.293362] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.936 [2024-07-15 20:39:55.300347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.300770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.300805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.300822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.301069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.301313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.301336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.301352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.304936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.936 [2024-07-15 20:39:55.314408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.314837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.314884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.314906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.315146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.315391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.315414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.315429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.319013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.936 [2024-07-15 20:39:55.328335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.328786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.328817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.328834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.329079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.329322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.329345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.329361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.936 [2024-07-15 20:39:55.332961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.936 [2024-07-15 20:39:55.342294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.342733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.342773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.342790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.343039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.343282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.343306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.343321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.346947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.936 [2024-07-15 20:39:55.356247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.356681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.356712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.356730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.356979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.357229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.357252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.357268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.360840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.936 [2024-07-15 20:39:55.366572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:16.936 [2024-07-15 20:39:55.370180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.370656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.370689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.370706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.370969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.371220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.371244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.371259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.374901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.936 [2024-07-15 20:39:55.384267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.384931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.384973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.384994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.385247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.385493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.385518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.385536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.389134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.936 [2024-07-15 20:39:55.398318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.398787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.398818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.398836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.399083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.399327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.399351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.399377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.402962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.936 [2024-07-15 20:39:55.412273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.936 [2024-07-15 20:39:55.412757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.936 [2024-07-15 20:39:55.412789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.936 [2024-07-15 20:39:55.412807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.936 [2024-07-15 20:39:55.413054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.936 [2024-07-15 20:39:55.413298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.936 [2024-07-15 20:39:55.413322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.936 [2024-07-15 20:39:55.413337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.936 [2024-07-15 20:39:55.416922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.937 [2024-07-15 20:39:55.426235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.937 [2024-07-15 20:39:55.426892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.937 [2024-07-15 20:39:55.426936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.937 [2024-07-15 20:39:55.426957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.937 [2024-07-15 20:39:55.427204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.937 [2024-07-15 20:39:55.427461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.937 [2024-07-15 20:39:55.427485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.937 [2024-07-15 20:39:55.427502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.937 [2024-07-15 20:39:55.431085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.937 [2024-07-15 20:39:55.440177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.937 [2024-07-15 20:39:55.440649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.937 [2024-07-15 20:39:55.440680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.937 [2024-07-15 20:39:55.440697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.937 [2024-07-15 20:39:55.440943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.937 [2024-07-15 20:39:55.441186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.937 [2024-07-15 20:39:55.441210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.937 [2024-07-15 20:39:55.441225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.937 [2024-07-15 20:39:55.444800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.937 [2024-07-15 20:39:55.454095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.937 [2024-07-15 20:39:55.454575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.937 [2024-07-15 20:39:55.454606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:16.937 [2024-07-15 20:39:55.454624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:16.937 [2024-07-15 20:39:55.454867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:16.937 [2024-07-15 20:39:55.455120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.937 [2024-07-15 20:39:55.455145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.937 [2024-07-15 20:39:55.455168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.937 [2024-07-15 20:39:55.458741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.937 [2024-07-15 20:39:55.462744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.937 [2024-07-15 20:39:55.462782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.937 [2024-07-15 20:39:55.462798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.937 [2024-07-15 20:39:55.462811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.937 [2024-07-15 20:39:55.462822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.197 [2024-07-15 20:39:55.463068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.197 [2024-07-15 20:39:55.463098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.197 [2024-07-15 20:39:55.463101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.197 [2024-07-15 20:39:55.468043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.468556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.468591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.468610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.468855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.469110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.469135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.469152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:17.197 [2024-07-15 20:39:55.472737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.197 [2024-07-15 20:39:55.482067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.482708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.482751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.482772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.483033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.483279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.483304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.483330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.486917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.197 [2024-07-15 20:39:55.496031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.496687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.496734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.496755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.497012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.497259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.497283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.497301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.500885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.197 [2024-07-15 20:39:55.509984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.510611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.510658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.510680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.510942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.511189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.511213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.511230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.514805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.197 [2024-07-15 20:39:55.523896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.524391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.524427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.524446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.524690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.524943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.524968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.524984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.528563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.197 [2024-07-15 20:39:55.537864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.538570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.538633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.538657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.538927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.539176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.539200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.539218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.542797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.197 [2024-07-15 20:39:55.551897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.552401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.552440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.552460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.552704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.552959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.552984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.553000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.556579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.197 [2024-07-15 20:39:55.565869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.566284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.566315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.566333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.566571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.566814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.566837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.566852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.570239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.197 [2024-07-15 20:39:55.579399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.197 [2024-07-15 20:39:55.579804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.197 [2024-07-15 20:39:55.579832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.197 [2024-07-15 20:39:55.579848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.197 [2024-07-15 20:39:55.580071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.197 [2024-07-15 20:39:55.580301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.197 [2024-07-15 20:39:55.580322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.197 [2024-07-15 20:39:55.580336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.197 [2024-07-15 20:39:55.583611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.197 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:17.197 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:17.197 20:39:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.198 [2024-07-15 20:39:55.593056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.198 [2024-07-15 20:39:55.593498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.198 [2024-07-15 20:39:55.593526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.198 [2024-07-15 20:39:55.593543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.198 [2024-07-15 20:39:55.593757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.198 [2024-07-15 20:39:55.594015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.198 [2024-07-15 20:39:55.594038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.198 [2024-07-15 20:39:55.594051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.198 [2024-07-15 20:39:55.597313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.198 [2024-07-15 20:39:55.606584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.198 [2024-07-15 20:39:55.607015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.198 [2024-07-15 20:39:55.607045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.198 [2024-07-15 20:39:55.607061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.198 [2024-07-15 20:39:55.607276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.198 [2024-07-15 20:39:55.607504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.198 [2024-07-15 20:39:55.607525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.198 [2024-07-15 20:39:55.607538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.198 [2024-07-15 20:39:55.610727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.198 [2024-07-15 20:39:55.613342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.198 [2024-07-15 20:39:55.620263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.198 [2024-07-15 20:39:55.620696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.198 [2024-07-15 20:39:55.620724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.198 [2024-07-15 20:39:55.620739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.198 [2024-07-15 20:39:55.620961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.198 [2024-07-15 20:39:55.621180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.198 [2024-07-15 20:39:55.621201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.198 [2024-07-15 20:39:55.621214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.198 [2024-07-15 20:39:55.624484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.198 [2024-07-15 20:39:55.633794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.198 [2024-07-15 20:39:55.634222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.198 [2024-07-15 20:39:55.634250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.198 [2024-07-15 20:39:55.634265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.198 [2024-07-15 20:39:55.634507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.198 [2024-07-15 20:39:55.634712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.198 [2024-07-15 20:39:55.634731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.198 [2024-07-15 20:39:55.634744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.198 [2024-07-15 20:39:55.637898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.198 [2024-07-15 20:39:55.647391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.198 [2024-07-15 20:39:55.647929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.198 [2024-07-15 20:39:55.647960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.198 [2024-07-15 20:39:55.647976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.198 [2024-07-15 20:39:55.648216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.198 [2024-07-15 20:39:55.648429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.198 [2024-07-15 20:39:55.648449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.198 [2024-07-15 20:39:55.648463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.198 [2024-07-15 20:39:55.651698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.198 [2024-07-15 20:39:55.660850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.198 [2024-07-15 20:39:55.661481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.198 [2024-07-15 20:39:55.661532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.198 [2024-07-15 20:39:55.661552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.198 [2024-07-15 20:39:55.661790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.198 [2024-07-15 20:39:55.662035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.198 [2024-07-15 20:39:55.662057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.198 [2024-07-15 20:39:55.662073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.198 Malloc0 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.198 [2024-07-15 20:39:55.665352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.198 [2024-07-15 20:39:55.674442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.198 [2024-07-15 20:39:55.674835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.198 [2024-07-15 20:39:55.674863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110bf70 with addr=10.0.0.2, port=4420 00:34:17.198 [2024-07-15 20:39:55.674886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bf70 is same with the state(5) to be set 00:34:17.198 [2024-07-15 20:39:55.675103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110bf70 (9): Bad file descriptor 00:34:17.198 [2024-07-15 20:39:55.675334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.198 [2024-07-15 20:39:55.675355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.198 [2024-07-15 20:39:55.675368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.198 [2024-07-15 20:39:55.678651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.198 [2024-07-15 20:39:55.683490] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.198 20:39:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 12195 00:34:17.198 [2024-07-15 20:39:55.688037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.456 [2024-07-15 20:39:55.761838] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:27.423 00:34:27.423 Latency(us) 00:34:27.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.423 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:27.423 Verification LBA range: start 0x0 length 0x4000 00:34:27.423 Nvme1n1 : 15.01 6707.86 26.20 8604.50 0.00 8333.83 867.75 22427.88 00:34:27.423 =================================================================================================================== 00:34:27.423 Total : 6707.86 26.20 8604.50 0.00 8333.83 867.75 22427.88 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:27.423 rmmod nvme_tcp 00:34:27.423 rmmod nvme_fabrics 00:34:27.423 rmmod nvme_keyring 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 12877 ']' 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 12877 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 12877 ']' 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 12877 00:34:27.423 20:40:05 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 12877 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 12877' 00:34:27.423 killing process with pid 12877 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 12877 00:34:27.423 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 12877 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:27.424 20:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.327 20:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:29.327 00:34:29.327 real 0m22.429s 00:34:29.327 user 0m59.147s 00:34:29.327 sys 0m4.585s 00:34:29.327 20:40:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:29.327 20:40:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:29.327 ************************************ 00:34:29.327 END TEST nvmf_bdevperf 00:34:29.327 ************************************ 00:34:29.327 20:40:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:29.327 20:40:07 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:29.327 20:40:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:29.327 20:40:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.327 20:40:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.327 ************************************ 00:34:29.327 START TEST nvmf_target_disconnect 00:34:29.327 ************************************ 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:29.327 * Looking for test storage... 
00:34:29.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:29.327 20:40:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:31.224 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:31.225 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:31.225 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.225 20:40:09 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:31.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:31.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:31.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:31.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:34:31.225 00:34:31.225 --- 10.0.0.2 ping statistics --- 00:34:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.225 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:31.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:31.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:34:31.225 00:34:31.225 --- 10.0.0.1 ping statistics --- 00:34:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.225 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.225 20:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:31.484 ************************************ 00:34:31.484 START TEST nvmf_target_disconnect_tc1 00:34:31.484 ************************************ 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:31.484 
20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:31.484 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.484 [2024-07-15 20:40:09.859321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.484 [2024-07-15 20:40:09.859396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbb590 with addr=10.0.0.2, port=4420 00:34:31.484 [2024-07-15 20:40:09.859431] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:31.484 [2024-07-15 20:40:09.859457] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:31.484 [2024-07-15 20:40:09.859473] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:31.484 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:31.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:31.484 Initializing NVMe Controllers 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:31.484 00:34:31.484 real 0m0.100s 00:34:31.484 user 0m0.042s 00:34:31.484 sys 
0m0.055s 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:31.484 ************************************ 00:34:31.484 END TEST nvmf_target_disconnect_tc1 00:34:31.484 ************************************ 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:31.484 ************************************ 00:34:31.484 START TEST nvmf_target_disconnect_tc2 00:34:31.484 ************************************ 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=16035 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 16035 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 16035 ']' 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:31.484 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:31.485 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:31.485 20:40:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:31.485 [2024-07-15 20:40:09.978476] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:31.485 [2024-07-15 20:40:09.978549] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.749 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.749 [2024-07-15 20:40:10.051007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:31.749 [2024-07-15 20:40:10.141735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.749 [2024-07-15 20:40:10.141801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.749 [2024-07-15 20:40:10.141829] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.749 [2024-07-15 20:40:10.141840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.749 [2024-07-15 20:40:10.141849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.749 [2024-07-15 20:40:10.142004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:31.749 [2024-07-15 20:40:10.142235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:31.749 [2024-07-15 20:40:10.142298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:31.749 [2024-07-15 20:40:10.142301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:31.749 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:31.749 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:31.749 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:31.749 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:31.749 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 Malloc0 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:32.007 20:40:10 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 [2024-07-15 20:40:10.323801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 [2024-07-15 20:40:10.352083] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=16184 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:32.007 20:40:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:32.007 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:33.911 20:40:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 16035 00:34:33.911 20:40:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting 
I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 [2024-07-15 20:40:12.376910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Write completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.911 Read completed with error (sct=0, sc=8) 00:34:33.911 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 [2024-07-15 20:40:12.377212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 
00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 [2024-07-15 20:40:12.377538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 
00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Read completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 Write completed with error (sct=0, sc=8) 00:34:33.912 starting I/O failed 00:34:33.912 [2024-07-15 20:40:12.377904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:33.912 [2024-07-15 20:40:12.378151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.378193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.378378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.378407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.378597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.378625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.378793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.378823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.379043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.379074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.379261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.379288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 
00:34:33.912 [2024-07-15 20:40:12.379506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.379532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.379828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.379892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.380061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.380089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.380240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.380267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.380457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.380486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.380697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.380727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.380960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.380987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.381135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.912 [2024-07-15 20:40:12.381162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.912 qpair failed and we were unable to recover it. 00:34:33.912 [2024-07-15 20:40:12.381391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.381418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.381618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.381644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 
00:34:33.913 [2024-07-15 20:40:12.381859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.381895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.382050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.382079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.382267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.382294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.382467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.382494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.382681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.382708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.382894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.382922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.383077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.383105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.383293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.383320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.383510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.383552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.383750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.383777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 
00:34:33.913 [2024-07-15 20:40:12.383948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.383975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.384121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.384157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.384334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.384373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.384694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.384747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.384929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.384957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.385113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.385141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.385319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.385345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.385522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.385548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.385788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.385831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.386031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.386057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 
00:34:33.913 [2024-07-15 20:40:12.386213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.386239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.386388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.386416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.386589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.386616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.386780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.386806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.386966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.386993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.387155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.387181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.387349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.387393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.387604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.387664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.387874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.387912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.388051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.388077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 
00:34:33.913 [2024-07-15 20:40:12.388255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.388280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.388489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.388540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.388770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.388800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.388972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.388999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.389148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.389174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.389327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.389369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.389618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.389662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.913 qpair failed and we were unable to recover it. 00:34:33.913 [2024-07-15 20:40:12.389829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.913 [2024-07-15 20:40:12.389860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.390051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.390083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.390235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.390263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 
00:34:33.914 [2024-07-15 20:40:12.390438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.390464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.390637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.390663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.390867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.390902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.391041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.391068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.391254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.391279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.391518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.391547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.391714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.391758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.391933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.391960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.392117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.392145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.392396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.392422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 
00:34:33.914 [2024-07-15 20:40:12.392568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.392593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.392814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.392845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.393024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.393050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.393194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.393235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.393433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.393464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.393660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.393686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.393855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.393889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.394063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.394090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.394262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.394288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.394561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.394615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 
00:34:33.914 [2024-07-15 20:40:12.394839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.394866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.395035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.395061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.395277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.395309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.395576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.395623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.395845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.395883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.396081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.396107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.396283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.396309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.396480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.396523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.396711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.396758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.396954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.396981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 
00:34:33.914 [2024-07-15 20:40:12.397129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.397156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.397366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.397408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.397637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.397668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.397862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.397898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.398076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.398103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.398285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.398311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.398458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.398485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.914 [2024-07-15 20:40:12.398682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.914 [2024-07-15 20:40:12.398709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.914 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.398891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.398919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.399093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.399119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 
00:34:33.915 [2024-07-15 20:40:12.399296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.399323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.399499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.399527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.399725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.399752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.399923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.399950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.400126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.400152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.400325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.400352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.400537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.400566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.400729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.400760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.400962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.400990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.401167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.401194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 
00:34:33.915 [2024-07-15 20:40:12.401374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.401402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.401595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.401630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.401832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.401859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.402045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.402073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.402272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.402301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.402511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.402541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.402726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.402755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.402974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.403001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.403145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.403171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.403379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.403423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 
00:34:33.915 [2024-07-15 20:40:12.403650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.403680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.403874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.403912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.404062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.404089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.404285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.404314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.404546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.404572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.404755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.404782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.404929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.404956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.405127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.405154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.405354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.405382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.405586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.405624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 
00:34:33.915 [2024-07-15 20:40:12.405823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.405852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.406043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.406072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.915 qpair failed and we were unable to recover it. 00:34:33.915 [2024-07-15 20:40:12.406216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.915 [2024-07-15 20:40:12.406243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.406475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.406519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.406731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.406775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.406971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.406999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.407151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.407178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.407402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.407446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.407706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.407752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.407934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.407962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 
00:34:33.916 [2024-07-15 20:40:12.408132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.408160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.408355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.408381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.408576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.408602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.408801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.408828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.408999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.409026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.409252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.409296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.409523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.409568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.409748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.409775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.409938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.409966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.410137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.410164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 
00:34:33.916 [2024-07-15 20:40:12.410393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.410435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.410685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.410716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.410924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.410951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.411127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.411154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.411346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.411390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.411624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.411667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.411844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.411870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.412044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.412070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.412307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.412334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.412506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.412551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 
00:34:33.916 [2024-07-15 20:40:12.412721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.412748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.412898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.412925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.413079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.413106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.413320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.413365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.413592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.413622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.413849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.413881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.414055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.414082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.414237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.414265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.414434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.414461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 00:34:33.916 [2024-07-15 20:40:12.414638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.916 [2024-07-15 20:40:12.414664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:33.916 qpair failed and we were unable to recover it. 
00:34:33.916 [2024-07-15 20:40:12.414856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.414903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.415078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.415105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.415279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.415308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.415612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.415641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.415822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.415849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.416028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.416055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.416275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.416303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.416651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.416703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.416896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.416947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.417962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.417993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 
00:34:33.917 [2024-07-15 20:40:12.418173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.418200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.418374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.418403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.418733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.418783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.418986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.419012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.419192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.419220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.419417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.419447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.419704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.419733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.419939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.419966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.420137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.420178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.420948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.420978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 
00:34:33.917 [2024-07-15 20:40:12.421168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.421195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.421368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.421395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.421604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.421633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.421823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.421852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.422056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.422083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.422260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.422286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.422499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.422547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.422763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.422791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.422996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.423023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.423201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.423227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 
00:34:33.917 [2024-07-15 20:40:12.423983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.424013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.424172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.424199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.424951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.424982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.425200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.425229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.425432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.425458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.425681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.425715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.425956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.425984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.426135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.426161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.426376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.426405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 00:34:33.917 [2024-07-15 20:40:12.426601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.917 [2024-07-15 20:40:12.426632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.917 qpair failed and we were unable to recover it. 
00:34:33.917 [2024-07-15 20:40:12.426786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.426816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.427007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.427033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.427231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.427257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.427461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.427490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.427662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.427690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.427889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.427916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.428109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.428135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.428336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.428365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.428693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.428744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.428974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.429000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 
00:34:33.918 [2024-07-15 20:40:12.429172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.429200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.429418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.429444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.429777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.429838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.430020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.430047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.430226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.430253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.430565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.430612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.430807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.430835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.431046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.431073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.431273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.431302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.431493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.431522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 
00:34:33.918 [2024-07-15 20:40:12.431744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.431769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.431965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.431995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.432185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.432213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.432408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.432434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.432601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.432630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.432845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.432874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.433105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.433134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.433315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.433340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.433531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.433562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.433772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.433801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 
00:34:33.918 [2024-07-15 20:40:12.433999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.434025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.434196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.434225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.434416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.434445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.434639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.434666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.434818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.434844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.435087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.435115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.435272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.435299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.435536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.435587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.435748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.435777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.435972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.435998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 
00:34:33.918 [2024-07-15 20:40:12.436165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.918 [2024-07-15 20:40:12.436194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.918 qpair failed and we were unable to recover it. 00:34:33.918 [2024-07-15 20:40:12.436411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.919 [2024-07-15 20:40:12.436440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.919 qpair failed and we were unable to recover it. 00:34:33.919 [2024-07-15 20:40:12.436628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.919 [2024-07-15 20:40:12.436653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.919 qpair failed and we were unable to recover it. 00:34:33.919 [2024-07-15 20:40:12.436816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.919 [2024-07-15 20:40:12.436846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.919 qpair failed and we were unable to recover it. 00:34:33.919 [2024-07-15 20:40:12.437011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.919 [2024-07-15 20:40:12.437040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:33.919 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.437207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.437236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.437402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.437428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.437668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.437722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.437913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.437942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.438137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.438163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 
00:34:34.194 [2024-07-15 20:40:12.438382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.438411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.438601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.438630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.438788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.438815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.438990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.439016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.439184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.439213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.439397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.439426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.439595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.439621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.439839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.439868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.440068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.440094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.440264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.440290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 
00:34:34.194 [2024-07-15 20:40:12.440467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.440493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.440752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.440802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.441000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.441027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.441199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.441247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.441438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.441467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.441640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.441666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.441862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.441893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.442076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.442106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.442306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.442333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.442507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.442533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 
00:34:34.194 [2024-07-15 20:40:12.442703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.442731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.442950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.442977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.443146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.443175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.443400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.443429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.443626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.443653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.443887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.443917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.444083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.444109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.444287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.444313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.444479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.444505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 00:34:34.194 [2024-07-15 20:40:12.444655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.194 [2024-07-15 20:40:12.444681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.194 qpair failed and we were unable to recover it. 
00:34:34.195 [2024-07-15 20:40:12.444834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.444860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.445049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.445075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.445269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.445298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.445490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.445516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.445705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.445734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.445939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.445966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.446113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.446139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.446292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.446318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.446483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.446508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.446681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.446708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 
00:34:34.195 [2024-07-15 20:40:12.446893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.446943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.447095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.447122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.447262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.447288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.447471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.447499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.447663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.447691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.447854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.447889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.448059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.448085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.448222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.448248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.448446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.448472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.448630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.448658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 
00:34:34.195 [2024-07-15 20:40:12.448874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.449006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.449184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.449211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.449371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.449399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.449638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.449684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.449892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.449919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.450061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.450086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.450254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.450282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.450499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.450525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.450694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.450722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.450941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.450968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 
00:34:34.195 [2024-07-15 20:40:12.451143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.451169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.451341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.451367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.451683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.451735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.451955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.451982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.452168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.452197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.452380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.452408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.452604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.452629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.452818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.452851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.453056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.453082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 00:34:34.195 [2024-07-15 20:40:12.453226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.195 [2024-07-15 20:40:12.453251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.195 qpair failed and we were unable to recover it. 
00:34:34.195 [2024-07-15 20:40:12.453403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.453432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.453691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.453733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.454025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.454051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.454205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.454231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.454422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.454451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.454611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.454637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.454855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.454891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.455094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.455120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.455255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.455281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.455476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.455504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 
00:34:34.196 [2024-07-15 20:40:12.455796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.455845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.456037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.456064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.456225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.456254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.456434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.456463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.456635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.456660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.456815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.456841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.457017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.457043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.457204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.457230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.457425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.457456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.457667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.457712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 
00:34:34.196 [2024-07-15 20:40:12.457904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.457931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.458082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.458108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.458307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.458336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.458495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.458521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.458715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.458743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.458949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.458976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.459151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.459177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.459406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.459434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.459618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.459647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.459842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.459871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 
00:34:34.196 [2024-07-15 20:40:12.460096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.460122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.460291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.460316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.460460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.460486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.460706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.460734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.460923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.460952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.461140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.461166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.461386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.461415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.461606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.461634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.461837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.461863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.196 [2024-07-15 20:40:12.462063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.462093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 
00:34:34.196 [2024-07-15 20:40:12.462281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.196 [2024-07-15 20:40:12.462309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.196 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.462473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.462499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.462684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.462713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.462871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.462907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.463071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.463097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.463317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.463345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.463543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.463569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.463765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.463791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.463949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.463978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.464175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.464203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 
00:34:34.197 [2024-07-15 20:40:12.464403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.464429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.464604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.464630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.464806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.464832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.464983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.465009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.465175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.465200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.465374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.465402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.465619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.465645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.465836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.465865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.466087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.466112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.466306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.466331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 
00:34:34.197 [2024-07-15 20:40:12.466556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.466584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.466766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.466795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.466954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.466980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.467160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.467185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.467379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.467408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.467602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.467633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.467836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.467865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.468047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.468076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.468269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.468295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.468516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.468545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 
00:34:34.197 [2024-07-15 20:40:12.468768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.468796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.469024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.469050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.469221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.469250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.469435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.469464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.469620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.469645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.469785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.469828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.470014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.470041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.470209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.470235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.470417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.470446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.470670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.470696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 
00:34:34.197 [2024-07-15 20:40:12.470892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.470937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.471108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.197 [2024-07-15 20:40:12.471134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.197 qpair failed and we were unable to recover it. 00:34:34.197 [2024-07-15 20:40:12.471332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.471361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.471587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.471612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.471761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.471787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.471982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.472008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.472191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.472217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.472408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.472437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.472649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.472677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.472892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.472919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 
00:34:34.198 [2024-07-15 20:40:12.473094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.473122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.473344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.473370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.473507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.473537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.473735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.473764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.473973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.474003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.474169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.474195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.474389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.474418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.474612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.474638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.474828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.474856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.475065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.475091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 
00:34:34.198 [2024-07-15 20:40:12.475273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.475303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.475520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.475546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.475698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.475727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.475943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.475969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.476166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.476192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.476389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.476417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.476589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.476618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.476787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.476812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.476955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.476981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.477159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.477184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 
00:34:34.198 [2024-07-15 20:40:12.477385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.477411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.477576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.477604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.477827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.477856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.478065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.478091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.478319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.478347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.478565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.198 [2024-07-15 20:40:12.478594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.198 qpair failed and we were unable to recover it. 00:34:34.198 [2024-07-15 20:40:12.478761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.478786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.478989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.479019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.479206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.479234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.479466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.479492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 
00:34:34.199 [2024-07-15 20:40:12.479729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.479758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.479986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.480012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.480166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.480192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.480362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.480387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.480591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.480619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.480792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.480818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.480997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.481023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.481228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.481257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.481414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.481440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.481645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.481674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 
00:34:34.199 [2024-07-15 20:40:12.481835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.481863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.482054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.482080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.482249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.482279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.482444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.482474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.482631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.482657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.482847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.482882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.483074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.483103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.483317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.483342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.483552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.483581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.483764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.483792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 
00:34:34.199 [2024-07-15 20:40:12.483952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.483978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.484174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.484203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.484365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.484393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.484588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.484614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.484848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.484874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.485024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.485049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.485222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.485248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.485449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.485478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.485693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.485721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.485942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.485969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 
00:34:34.199 [2024-07-15 20:40:12.486162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.486191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.486405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.486433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.486600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.486626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.486796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.486821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.486964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.486991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.487189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.487214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.487449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.199 [2024-07-15 20:40:12.487478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.199 qpair failed and we were unable to recover it. 00:34:34.199 [2024-07-15 20:40:12.487700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.487729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.487976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.488003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.488173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.488199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 
00:34:34.200 [2024-07-15 20:40:12.488390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.488423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.488646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.488672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.488872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.488927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.489135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.489177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.489373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.489401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.489591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.489620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.489805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.489833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.490040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.490067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.490267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.490293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 00:34:34.200 [2024-07-15 20:40:12.490497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.200 [2024-07-15 20:40:12.490522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.200 qpair failed and we were unable to recover it. 
00:34:34.205 [2024-07-15 20:40:12.533444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.533488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.533679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.533708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.533871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.533908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.534066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.534092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.534289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.534318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.534532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.534561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.534742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.534771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.534968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.534995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.535148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.535174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.535376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.535404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 
00:34:34.205 [2024-07-15 20:40:12.535564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.535593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.535811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.535837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.536077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.536106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.536299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.536328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.536544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.536573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.536756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.536782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.536941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.536971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.537125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.537158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.537314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.537343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.537540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.537566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 
00:34:34.205 [2024-07-15 20:40:12.537742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.537768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.537931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.537958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.538152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.538180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.538361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.538387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.538531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.538557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.538777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.538806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.539029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.539059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.539274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.539301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.539578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.539605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.539759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.539785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 
00:34:34.205 [2024-07-15 20:40:12.540008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.540038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.540212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.540238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.540431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.540481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.540649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.540677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.540867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.540904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.541095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.541121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.541359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.541406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.205 [2024-07-15 20:40:12.541597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.205 [2024-07-15 20:40:12.541627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.205 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.541819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.541848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.542046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.542072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 
00:34:34.206 [2024-07-15 20:40:12.542340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.542393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.542611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.542640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.542832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.542861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.543048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.543074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.543250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.543280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.543454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.543480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.543648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.543676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.543861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.543895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.544049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.544075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.544264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.544292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 
00:34:34.206 [2024-07-15 20:40:12.544487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.544514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.544711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.544737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.544965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.544995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.545213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.545242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.545404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.545433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.545629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.545655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.545852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.545890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.546047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.546077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.546248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.546277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.546464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.546490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 
00:34:34.206 [2024-07-15 20:40:12.546643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.546669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.546851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.546896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.547115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.547145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.547310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.547335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.547509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.547535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.547763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.547792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.547984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.548013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.548205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.548231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.548407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.548432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.548624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.548653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 
00:34:34.206 [2024-07-15 20:40:12.548838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.548866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.549097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.549127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.549426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.549500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.549714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.549743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.549956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.549985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.550142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.550172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.550349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.550375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.550571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.206 [2024-07-15 20:40:12.550599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.206 qpair failed and we were unable to recover it. 00:34:34.206 [2024-07-15 20:40:12.550791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.550820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.551041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.551067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 
00:34:34.207 [2024-07-15 20:40:12.551321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.551371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.551569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.551597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.551812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.551840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.552017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.552043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.552233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.552321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.552542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.552570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.552764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.552793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.552989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.553015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.553189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.553218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.553372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.553400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 
00:34:34.207 [2024-07-15 20:40:12.553598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.553624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.553766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.553791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.553954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.553981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.554144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.554181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.554398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.554427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.554596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.554622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.554852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.554905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.555078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.555106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.555297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.555325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.555517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.555543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 
00:34:34.207 [2024-07-15 20:40:12.555732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.555760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.555929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.555958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.556181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.556206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.556375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.556401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.556588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.556616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.556806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.556845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.557053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.557080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.557251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.557277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.557459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.557487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.557650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.557678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 
00:34:34.207 [2024-07-15 20:40:12.557839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.557885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.558086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.558112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.558317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.558349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.558513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.558542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.207 [2024-07-15 20:40:12.558726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.207 [2024-07-15 20:40:12.558754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.207 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.558946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.558972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.559253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.559320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.559489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.559520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.559707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.559735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.559927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.559953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 
00:34:34.208 [2024-07-15 20:40:12.560154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.560185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.560381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.560409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.560579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.560608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.560798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.560824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.561027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.561057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.561219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.561248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.561439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.561467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.561667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.561692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.561860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.561894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.562083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.562111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 
00:34:34.208 [2024-07-15 20:40:12.562335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.562361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.562540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.562566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.562787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.562826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.563042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.563072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.563234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.563274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.563468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.563497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.563669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.563695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.563897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.563926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.564091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.564120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.564314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.564343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 
00:34:34.208 [2024-07-15 20:40:12.564519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.564545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.564679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.564723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.564946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.564975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.565148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.565181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.565357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.565383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.565527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.565554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.565779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.565807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.565989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.566016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.566165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.566191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.566381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.566410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 
00:34:34.208 [2024-07-15 20:40:12.566576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.566605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.566777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.566803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.566960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.566987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.567170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.567199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.567390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.208 [2024-07-15 20:40:12.567419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.208 qpair failed and we were unable to recover it. 00:34:34.208 [2024-07-15 20:40:12.567628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.567654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.567811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.567840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.568038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.568067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.568250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.568279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.568491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.568517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 
00:34:34.209 [2024-07-15 20:40:12.568681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.568711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.568936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.568966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.569146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.569175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.569369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.569395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.569668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.569719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.569906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.569935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.570149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.570182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.570373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.570399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.570608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.570637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.570818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.570846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 
00:34:34.209 [2024-07-15 20:40:12.571019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.571046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.571228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.571254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.571487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.571537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.571732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.571757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.571977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.572006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.572215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.572241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.572393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.572419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.572590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.572618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.572831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.572859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.573064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.573091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 
00:34:34.209 [2024-07-15 20:40:12.573280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.573351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.573534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.573562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.573728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.573757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.573949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.573975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.574125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.574151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.574341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.574369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.574537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.574566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.574762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.574787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.574985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.575026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.575220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.575249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 
00:34:34.209 [2024-07-15 20:40:12.575468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.575496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.575686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.575711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.575902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.575935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.576131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.576158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.576390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.576419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.576614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.209 [2024-07-15 20:40:12.576640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.209 qpair failed and we were unable to recover it. 00:34:34.209 [2024-07-15 20:40:12.576834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.576873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.577042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.577070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.577286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.577314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.577478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.577504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 
00:34:34.210 [2024-07-15 20:40:12.577685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.577739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.577900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.577930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.578095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.578121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.578303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.578329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.578535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.578563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.578721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.578749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.578928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.578957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.579181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.579207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.579431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.579459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.579675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.579703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 
00:34:34.210 [2024-07-15 20:40:12.579896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.579926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.580116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.580142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.580417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.580469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.580657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.580685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.580871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.580980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.581201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.581227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.581578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.581627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.581824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.581852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.582035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.582062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.582236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.582261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 
00:34:34.210 [2024-07-15 20:40:12.582584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.582640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.582844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.582896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.583095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.583124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.583316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.583342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.583561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.583617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.583782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.583810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.584030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.584060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.584219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.584245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.584488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.584542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.584769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.584798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 
00:34:34.210 [2024-07-15 20:40:12.584996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.585024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.585175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.585201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.585366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.585395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.585611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.585639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.585844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.585890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.586065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.586091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.210 qpair failed and we were unable to recover it. 00:34:34.210 [2024-07-15 20:40:12.586265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.210 [2024-07-15 20:40:12.586291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.586488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.586516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.586715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.586741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.586950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.586977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 
00:34:34.211 [2024-07-15 20:40:12.587140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.587168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.587386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.587414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.587628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.587657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.587842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.587867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.588067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.588096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.588314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.588343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.588542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.588581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.588781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.588806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.588979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.589008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.589201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.589230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 
00:34:34.211 [2024-07-15 20:40:12.589417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.589446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.589667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.589693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.589893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.589923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.590082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.590110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.590302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.590330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.590519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.590554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.590840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.590915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.591108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.591137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.591363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.591391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.591585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.591611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 
00:34:34.211 [2024-07-15 20:40:12.591827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.591855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.592045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.592078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.592279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.592307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.592506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.592533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.592838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.592916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.593135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.593164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.593369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.593394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.593593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.593619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.593849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.593895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.594088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.594116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 
00:34:34.211 [2024-07-15 20:40:12.594304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.594333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.211 [2024-07-15 20:40:12.594527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.211 [2024-07-15 20:40:12.594552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.211 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.594842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.594916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.595122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.595150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.595381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.595410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.595638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.595664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.595890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.595919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.596068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.596096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.596305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.596334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.596531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.596559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 
00:34:34.212 [2024-07-15 20:40:12.596821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.596891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.597065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.597093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.597269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.597298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.597490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.597516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.597735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.597763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.597954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.597984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.598147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.598180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.598376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.598402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.598577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.598609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.598780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.598807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 
00:34:34.212 [2024-07-15 20:40:12.598975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.599001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.599153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.599189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.599348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.599377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.599571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.599600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.599757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.599785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.599992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.600019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.600264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.600316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.600510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.600539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.600702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.600731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.600903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.600930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 
00:34:34.212 [2024-07-15 20:40:12.601127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.601155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.601340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.601369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.601564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.601593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.601810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.601836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.602114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.602178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.602405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.602431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.602623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.602652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.602849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.602891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.603091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.603120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 00:34:34.212 [2024-07-15 20:40:12.603319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.212 [2024-07-15 20:40:12.603348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.212 qpair failed and we were unable to recover it. 
00:34:34.212 [2024-07-15 20:40:12.603530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.212 [2024-07-15 20:40:12.603559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:34.212 qpair failed and we were unable to recover it.
00:34:34.212 [... the same three-line failure sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 20:40:12.603 through 20:40:12.651 ...]
00:34:34.218 [2024-07-15 20:40:12.651399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.218 [2024-07-15 20:40:12.651425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:34.218 qpair failed and we were unable to recover it.
00:34:34.218 [2024-07-15 20:40:12.651626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.651654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.651852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.651897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.652090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.652116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.652293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.652318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.652650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.652701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.652898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.652927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.653127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.653152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.653333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.653370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.653536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.653565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.653777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.653803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 
00:34:34.218 [2024-07-15 20:40:12.653975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.654001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.654198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.654224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.654452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.654480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.654685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.654711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.654902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.654931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.655126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.655151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.655345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.655374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.655564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.655594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.655784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.655813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.655993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.656030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 
00:34:34.218 [2024-07-15 20:40:12.656167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.656193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.656398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.656427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.656590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.656620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.656845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.656892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.657120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.657147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.657347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.657373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.657582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.657615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.657811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.657836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.658036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.658065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 00:34:34.218 [2024-07-15 20:40:12.658267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.218 [2024-07-15 20:40:12.658295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.218 qpair failed and we were unable to recover it. 
00:34:34.218 [2024-07-15 20:40:12.658634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.658666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.658902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.658929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.659105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.659133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.659323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.659352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.659546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.659572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.659741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.659767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.659972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.660001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.660190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.660219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.660438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.660466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.660662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.660687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 
00:34:34.219 [2024-07-15 20:40:12.660889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.660918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.661085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.661125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.661277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.661306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.661528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.661554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.661747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.661775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.661945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.661974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.662192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.662233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.662395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.662421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.662611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.662640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.662872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.662906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 
00:34:34.219 [2024-07-15 20:40:12.663081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.663107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.663354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.663380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.663549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.663640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.663886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.663916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.664093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.664119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.664302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.664328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.664553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.664599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.664790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.664818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.665029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.665060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.665241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.665267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 
00:34:34.219 [2024-07-15 20:40:12.665535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.665590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.665785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.665814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.666004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.666034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.666227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.666255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.666442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.666470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.666630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.666659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.666839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.666886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.667053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.667078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.667241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.667270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.219 [2024-07-15 20:40:12.667458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.667487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 
00:34:34.219 [2024-07-15 20:40:12.667702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.219 [2024-07-15 20:40:12.667731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.219 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.667924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.667951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.668108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.668148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.668321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.668349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.668518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.668547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.668725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.668751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.668897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.668924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.669118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.669146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.669365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.669394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.669587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.669613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 
00:34:34.220 [2024-07-15 20:40:12.669802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.669835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.670028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.670057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.670245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.670274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.670487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.670513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.670711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.670740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.670954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.670983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.671200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.671229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.671401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.671427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.671666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.671720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.671934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.671963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 
00:34:34.220 [2024-07-15 20:40:12.672152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.672181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.672356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.672383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.672543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.672612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.672806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.672831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.673025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.673052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.673222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.673248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.673480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.220 [2024-07-15 20:40:12.673530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.220 qpair failed and we were unable to recover it. 00:34:34.220 [2024-07-15 20:40:12.673747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.673773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.673945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.673974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.674194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.674219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 
00:34:34.221 [2024-07-15 20:40:12.674496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.674555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.674770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.674796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.674990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.675020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.675196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.675221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.675418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.675446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.675638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.675666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.675833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.675862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.676045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.676071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.676281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.676310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.676465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.676495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 
00:34:34.221 [2024-07-15 20:40:12.676685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.676714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.676899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.676939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.677138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.677177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.677398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.677426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.677586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.677621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.677815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.677840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.678030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.678056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.678222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.678250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.678439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.678468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.678648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.678673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 
00:34:34.221 [2024-07-15 20:40:12.678847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.678888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.679122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.221 [2024-07-15 20:40:12.679151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.221 qpair failed and we were unable to recover it. 00:34:34.221 [2024-07-15 20:40:12.679374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.679403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.679594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.679619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.679818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.679846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.680067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.680096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.680294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.680323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.680511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.680537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.680795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.680844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.681069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.681096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 
00:34:34.222 [2024-07-15 20:40:12.681292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.681321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.681557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.681583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.681762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.681790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.682010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.682039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.682234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.682262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.682472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.682498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.682640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.682665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.682865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.682902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.683100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.683128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.683317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.683344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 
00:34:34.222 [2024-07-15 20:40:12.683530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.683585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.683736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.683767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.683971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.683997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.684142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.684168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.684381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.684409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.684611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.684641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.684863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.222 [2024-07-15 20:40:12.684916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.222 qpair failed and we were unable to recover it. 00:34:34.222 [2024-07-15 20:40:12.685121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.685147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.685450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.685517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.685909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.685962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 
00:34:34.223 [2024-07-15 20:40:12.686173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.686208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.686368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.686400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.686568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.686596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.686790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.686819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.686999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.687028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.687193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.687220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.687528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.687590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.687787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.687812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.688006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.688036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.688211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.688236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 
00:34:34.223 [2024-07-15 20:40:12.688486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.688514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.688716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.688744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.688938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.688968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.689192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.689228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.689482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.689511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.689731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.689759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.689989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.690018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.690221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.690258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.690594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.690654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.690890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.690920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 
00:34:34.223 [2024-07-15 20:40:12.691141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.691180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.691377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.691403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.691554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.691580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.223 qpair failed and we were unable to recover it. 00:34:34.223 [2024-07-15 20:40:12.691777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.223 [2024-07-15 20:40:12.691803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.692030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.692057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.692207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.692237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.692433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.692461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.692657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.692686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.692870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.692917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.693089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.693114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 
00:34:34.224 [2024-07-15 20:40:12.693387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.693440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.693626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.693654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.693838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.693886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.694056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.694082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.694263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.694290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.694524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.694550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.694696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.694722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.694901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.694928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.695105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.695131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.695317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.695346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 
00:34:34.224 [2024-07-15 20:40:12.695564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.695592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.695764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.695799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.695969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.695996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.696147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.696176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.696396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.696425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.696625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.696652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.696854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.696898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.697066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.697096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.697307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.697341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 00:34:34.224 [2024-07-15 20:40:12.697511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.224 [2024-07-15 20:40:12.697538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.224 qpair failed and we were unable to recover it. 
00:34:34.225 [2024-07-15 20:40:12.697738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.697766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.697956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.697986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.698198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.698236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.698471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.698497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.698698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.698727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.698926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.698953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.699142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.699171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.699341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.699366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.699536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.699562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.699707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.699733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 
00:34:34.225 [2024-07-15 20:40:12.699867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.699915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.700121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.700147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.700350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.700378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.700570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.700606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.700798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.700827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.701043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.701070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.701243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.701272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.701458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.701487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.701645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.701674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.701868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.701902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 
00:34:34.225 [2024-07-15 20:40:12.702097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.702125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.702347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.702373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.702567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.702596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.702767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.702794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.702963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.225 [2024-07-15 20:40:12.702992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.225 qpair failed and we were unable to recover it. 00:34:34.225 [2024-07-15 20:40:12.703165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.703190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.703340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.703394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.703582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.703608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.703792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.703820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.704036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.704065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 
00:34:34.226 [2024-07-15 20:40:12.704282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.704311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.704506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.704535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.704751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.704780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.704967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.704996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.705156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.705184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.705355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.705382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.705613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.705663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.705851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.705889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.706107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.706135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 00:34:34.226 [2024-07-15 20:40:12.706311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.226 [2024-07-15 20:40:12.706340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.226 qpair failed and we were unable to recover it. 
00:34:34.502 [2024-07-15 20:40:12.706567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.706596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.706786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.706814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.706979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.707009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.707172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.707211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.707356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.707409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.707579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.707607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.707834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.707862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.708048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.708074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.708258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.708284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.708444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.708469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 
00:34:34.502 [2024-07-15 20:40:12.708664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.708693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.708885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.708912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.709105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.709134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.709370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.709398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.709624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.709652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.709823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.709848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.709995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.710037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.710215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.710243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.710438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.710463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.710607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.710633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 
00:34:34.502 [2024-07-15 20:40:12.710781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.710807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.710958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.710984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.711175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.711204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.711391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.711416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.711597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.711627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.711781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.711809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.712026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.712055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.712228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.712255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.712443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.712475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.712666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.712694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 
00:34:34.502 [2024-07-15 20:40:12.712917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.502 [2024-07-15 20:40:12.712951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.502 qpair failed and we were unable to recover it. 00:34:34.502 [2024-07-15 20:40:12.713138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.713164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.713505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.713566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.713752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.713792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.713989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.714018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.714217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.714242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.714433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.714463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.714647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.714676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.714832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.714862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.715064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.715090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 
00:34:34.503 [2024-07-15 20:40:12.715374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.715437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.715652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.715680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.715870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.715913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.716087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.716113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.716321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.716350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.716571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.716600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.716800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.716828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.717032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.717059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.717295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.717358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.717545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.717574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 
00:34:34.503 [2024-07-15 20:40:12.717751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.717777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.717991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.718017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.718284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.718336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.718548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.718576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.718787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.718816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.719033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.719059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.719332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.719390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.719584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.719617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.719807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.719837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.720021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.720047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 
00:34:34.503 [2024-07-15 20:40:12.720233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.720262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.720452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.720491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.720674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.720709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.720891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.720918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.721078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.721107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.721332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.721360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.721575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.721612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.721800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.721829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.722022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.503 [2024-07-15 20:40:12.722051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.503 qpair failed and we were unable to recover it. 00:34:34.503 [2024-07-15 20:40:12.722209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.722237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 
00:34:34.504 [2024-07-15 20:40:12.722397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.722426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.722627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.722653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.722857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.722893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.723117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.723145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.723339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.723367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.723575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.723601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.723760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.723788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.723989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.724019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.724197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.724225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.724445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.724470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 
00:34:34.504 [2024-07-15 20:40:12.724736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.724787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.725010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.725039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.725203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.725231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.725418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.725444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.725686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.725748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.725967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.725994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.726137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.726179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.726371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.726398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.726589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.726618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.726765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.726794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 
00:34:34.504 [2024-07-15 20:40:12.726971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.727000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.727225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.727251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.727423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.727452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.727674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.727700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.727892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.727921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.728099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.728125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.728325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.728377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.728544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.728572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.728743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.728782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.728981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.729007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 
00:34:34.504 [2024-07-15 20:40:12.729175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.729203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.729403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.729431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.729606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.729634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.729856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.729900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.730076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.730105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.730303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.730331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.730562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.730588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.730768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.504 [2024-07-15 20:40:12.730794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.504 qpair failed and we were unable to recover it. 00:34:34.504 [2024-07-15 20:40:12.730987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.731016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.731188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.731217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 
00:34:34.505 [2024-07-15 20:40:12.731432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.731461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.731667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.731692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.731895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.731935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.732130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.732165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.732327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.732355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.732561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.732587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.732733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.732758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.732938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.732968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.733168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.733197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.733416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.733442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 
00:34:34.505 [2024-07-15 20:40:12.733740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.733802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.734002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.734031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.734198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.734226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.734393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.734419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.734556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.734599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.734760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.734792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.734986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.735016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.735207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.735232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.735425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.735453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.735626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.735655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 
00:34:34.505 [2024-07-15 20:40:12.735845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.735873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.736099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.736125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.736423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.736452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.736646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.736674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.736840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.736895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.737110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.737136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.737290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.737316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.737466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.737492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.737726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.737763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.737949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.737977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 
00:34:34.505 [2024-07-15 20:40:12.738136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.738162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.738335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.738361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.738534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.738563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.738757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.738783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.738945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.738975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.739123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.739152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.739346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.739375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.739595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.739621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.739788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.505 [2024-07-15 20:40:12.739816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.505 qpair failed and we were unable to recover it. 00:34:34.505 [2024-07-15 20:40:12.740000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.740029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 
00:34:34.506 [2024-07-15 20:40:12.740195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.740223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.740395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.740421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.740566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.740596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.740771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.740798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.740968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.740998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.741192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.741218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.741499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.741554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.741749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.741777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.741942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.741972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.742174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.742200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 
00:34:34.506 [2024-07-15 20:40:12.742398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.742426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.742585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.742613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.742833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.742862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.743049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.743075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.743260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.743289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.743500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.743529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.743726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.743755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.743963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.743989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.744191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.744219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.744382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.744411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 
00:34:34.506 [2024-07-15 20:40:12.744605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.744633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.744818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.744847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.745054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.745090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.745253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.745283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.745472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.745501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.745693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.745719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.745947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.746007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.746198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.746227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.746415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.746444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.746613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.746639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 
00:34:34.506 [2024-07-15 20:40:12.746839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.746868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.747043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.747072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.747289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.747328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.506 [2024-07-15 20:40:12.747527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.506 [2024-07-15 20:40:12.747553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.506 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.747755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.747783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.747966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.747995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.748185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.748213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.748427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.748453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.748649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.748678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.748894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.748923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 
00:34:34.507 [2024-07-15 20:40:12.749071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.749100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.749287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.749313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.749498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.749584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.749788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.749816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.750036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.750062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.750242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.750267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.750605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.750658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.750848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.750885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.751070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.751099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.751326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.751351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 
00:34:34.507 [2024-07-15 20:40:12.751684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.751762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.751986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.752015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.752206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.752234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.752426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.752452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.752622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.752647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.752821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.752846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.753066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.753092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.753293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.753319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.753519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.753547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.753736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.753764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 
00:34:34.507 [2024-07-15 20:40:12.753959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.753990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.754172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.754198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.754476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.754526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.754750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.754778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.754977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.755007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.755203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.755229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.755581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.755637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.755827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.755855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.756034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.756063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.756223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.756248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 
00:34:34.507 [2024-07-15 20:40:12.756466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.756499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.756689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.756718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.756874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.756917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.757115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.757141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.757305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.507 [2024-07-15 20:40:12.757332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.507 qpair failed and we were unable to recover it. 00:34:34.507 [2024-07-15 20:40:12.757547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.757575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.757792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.757820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.758023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.758050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.758249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.758277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.758467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.758495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 
00:34:34.508 [2024-07-15 20:40:12.758658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.758686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.758887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.758913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.759080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.759108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.759308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.759336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.759508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.759537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.759730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.759757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.759969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.759995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.760140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.760177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.760412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.760440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.760638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.760663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 
00:34:34.508 [2024-07-15 20:40:12.760856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.760902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.761084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.761112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.761310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.761338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.761527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.761553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.761745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.761774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.761999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.762028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.762220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.762248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.762448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.762477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.762676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.762705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.762921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.762950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 
00:34:34.508 [2024-07-15 20:40:12.763137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.763172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.763392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.763417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.763654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.763704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.763887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.763917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.764095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.764123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.764317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.764343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.764497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.764523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.764742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.764770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.765000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.765026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.765176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.765202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 
00:34:34.508 [2024-07-15 20:40:12.765392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.765421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.765616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.765644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.765799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.765827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.766015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.766041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.766245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.766296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.766517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.766545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.508 qpair failed and we were unable to recover it. 00:34:34.508 [2024-07-15 20:40:12.766735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.508 [2024-07-15 20:40:12.766764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.766967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.766994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.767215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.767273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.767495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.767524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 
00:34:34.509 [2024-07-15 20:40:12.767743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.767771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.767946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.767973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.768122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.768147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.768325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.768351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.768544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.768576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.768742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.768768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.768965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.768994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.769164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.769197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.769410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.769439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.769602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.769628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 
00:34:34.509 [2024-07-15 20:40:12.769825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.769851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.770096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.770125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.770327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.770352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.770520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.770546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.770739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.770767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.770930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.770959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.771121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.771149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.771347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.771372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.771568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.771596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.771750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.771779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 
00:34:34.509 [2024-07-15 20:40:12.772012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.772039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.772217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.772243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.772543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.772592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.772775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.772803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.772995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.773024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.773206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.773232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.773422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.773450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.773608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.773637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.773795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.773824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.774002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.774028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 
00:34:34.509 [2024-07-15 20:40:12.774176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.774217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.774371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.774400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.774593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.774622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.774810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.774836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.775003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.775032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.775215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.775243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.775405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.775433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.509 qpair failed and we were unable to recover it. 00:34:34.509 [2024-07-15 20:40:12.775596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.509 [2024-07-15 20:40:12.775622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.775802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.775830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.776017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.776046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 
00:34:34.510 [2024-07-15 20:40:12.776236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.776265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.776465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.776490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.776684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.776713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.776937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.776966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.777144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.777172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.777378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.777404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.777596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.777624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.777814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.777843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.778049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.778076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.778251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.778277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 
00:34:34.510 [2024-07-15 20:40:12.778442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.778471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.778691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.778720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.778888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.778917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.779134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.779160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.779395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.779450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.779672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.779700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.779862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.779898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.780093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.780119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.780276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.780304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.780474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.780503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 
00:34:34.510 [2024-07-15 20:40:12.780660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.780689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.780900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.780927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.781092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.781118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.781318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.781348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.781544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.781572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.781738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.781764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.781949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.781979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.782167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.782193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.782339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.782365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.782532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.782558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 
00:34:34.510 [2024-07-15 20:40:12.782698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.782724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.782943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.782973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.783158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.510 [2024-07-15 20:40:12.783190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.510 qpair failed and we were unable to recover it. 00:34:34.510 [2024-07-15 20:40:12.783382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.783408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.783637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.783666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.783863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.783895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.784089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.784117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.784331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.784356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.784707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.784768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.784982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.785011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 
00:34:34.511 [2024-07-15 20:40:12.785197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.785226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.785416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.785442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.785722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.785788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.786010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.786039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.786201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.786229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.786420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.786445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.786594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.786620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.786825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.786851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.787068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.787096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.787314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.787339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 
00:34:34.511 [2024-07-15 20:40:12.787525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.787553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.787761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.787789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.787980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.788009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.788183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.788209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.788425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.788453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.788619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.788647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.788841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.788867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.789078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.789103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.789357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.789385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.789600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.789632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 
00:34:34.511 [2024-07-15 20:40:12.789855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.789890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.790094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.790120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.790461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.790508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.790703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.790731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.790942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.790971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.791163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.791189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.791431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.791484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.791662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.791691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.791903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.791932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.792103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.792128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 
00:34:34.511 [2024-07-15 20:40:12.792297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.792322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.792463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.792488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.792692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.792720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.792921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.792948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.511 [2024-07-15 20:40:12.793096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.511 [2024-07-15 20:40:12.793121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.511 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.793338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.793366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.793550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.793579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.793742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.793767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.793986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.794016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.794211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.794240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 
00:34:34.512 [2024-07-15 20:40:12.794399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.794427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.794595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.794622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.794812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.794841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.795048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.795077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.795225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.795254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.795440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.795465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.795698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.795726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.795941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.795971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.796166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.796194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.796393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.796418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 
00:34:34.512 [2024-07-15 20:40:12.796593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.796619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.796813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.796842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.797017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.797043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.797217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.797243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.797531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.797583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.797750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.797778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.798000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.798026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.798204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.798230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.798495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.798545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.798761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.798789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 
00:34:34.512 [2024-07-15 20:40:12.798978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.799008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.799203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.799228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.799419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.799448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.799644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.799673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.799863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.799897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.800083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.800109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.800429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.800492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.800682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.800710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.800901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.800931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.801138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.801163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 
00:34:34.512 [2024-07-15 20:40:12.801504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.801565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.801896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.801955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.802142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.802171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.802394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.802420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.802686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.802736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.802955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.512 [2024-07-15 20:40:12.802984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.512 qpair failed and we were unable to recover it. 00:34:34.512 [2024-07-15 20:40:12.803175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.803203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.803361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.803387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.803680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.803745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.803974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.804003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 
00:34:34.513 [2024-07-15 20:40:12.804191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.804220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.804415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.804441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.804663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.804710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.804908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.804937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.805099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.805128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.805340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.805365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.805624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.805676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.805891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.805925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.806115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.806143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.806354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.806380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 
00:34:34.513 [2024-07-15 20:40:12.806550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.806578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.806793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.806821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.807061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.807088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.807261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.807287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.807442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.807468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.807619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.807644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.807810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.807838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.808026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.808052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.808254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.808282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.808471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.808499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 
00:34:34.513 [2024-07-15 20:40:12.808707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.808736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.808897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.808924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.809069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.809094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.809306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.809335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.809529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.809557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.809752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.809778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.810001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.810030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.810230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.810258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.810420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.810448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.810646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.810671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 
00:34:34.513 [2024-07-15 20:40:12.810842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.810868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.811103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.811133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.811323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.811352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.811572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.811598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.811792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.811825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.812017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.812046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.812203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.812232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.513 qpair failed and we were unable to recover it. 00:34:34.513 [2024-07-15 20:40:12.812425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.513 [2024-07-15 20:40:12.812452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.812624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.812711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.812869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.812915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 
00:34:34.514 [2024-07-15 20:40:12.813117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.813143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.813336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.813361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.813605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.813633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.813825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.813854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.814085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.814111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.814311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.814337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.814560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.814588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.814792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.814820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.815015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.815044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.815266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.815292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 
00:34:34.514 [2024-07-15 20:40:12.815596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.815655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.815871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.815906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.816075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.816104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.816324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.816350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.816539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.816568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.816721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.816757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.816970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.817001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.817204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.817230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.817493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.817547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.817760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.817788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 
00:34:34.514 [2024-07-15 20:40:12.818010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.818036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.818177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.818207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.818394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.818423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.818600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.818628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.818843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.818871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.819105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.819131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.819281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.819307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.819452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.819495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.819700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.819729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.819921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.819948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 
00:34:34.514 [2024-07-15 20:40:12.820102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.820128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.820301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.820327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.820497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.820526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.820723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.820749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.820918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.820945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.821144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.821173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.821338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.821368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.821569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.821595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.514 [2024-07-15 20:40:12.821790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.514 [2024-07-15 20:40:12.821816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.514 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.822020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.822049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 
00:34:34.515 [2024-07-15 20:40:12.822238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.822266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.822451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.822477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.822631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.822657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.822825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.822850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.823046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.823072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.823244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.823269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.823431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.823456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.823666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.823694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.823888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.823917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.824139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.824165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 
00:34:34.515 [2024-07-15 20:40:12.824358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.824386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.824610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.824638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.824860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.824898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.825068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.825094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.825262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.825290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.825472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.825501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.825692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.825720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.825886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.825912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.826058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.826099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.826281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.826309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 
00:34:34.515 [2024-07-15 20:40:12.826522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.826550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.826735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.826760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.827053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.827118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.827317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.827346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.827545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.827574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.827770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.827797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.827969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.827999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.828158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.828187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.828367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.828395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 00:34:34.515 [2024-07-15 20:40:12.828585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.515 [2024-07-15 20:40:12.828611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.515 qpair failed and we were unable to recover it. 
00:34:34.516 [2024-07-15 20:40:12.828763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.828791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.829007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.829036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.829220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.829248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.829444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.829470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.829692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.829721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.829914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.829943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.830168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.830193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.830364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.830390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.830730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.830787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.831012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.831041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 
00:34:34.516 [2024-07-15 20:40:12.831213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.831241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.831439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.831464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.831643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.831694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.831888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.831917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.832087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.832115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.832269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.832295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.832546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.832599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.832809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.832837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.833053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.833080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.833219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.833249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 
00:34:34.516 [2024-07-15 20:40:12.833507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.833560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.833788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.833814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.834043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.834072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.834247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.834273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.834527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.834579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.834811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.834839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.835013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.835040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.835210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.835236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.835519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.835579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.835802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.835831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 
00:34:34.516 [2024-07-15 20:40:12.836034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.836061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.836236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.836261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.836564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.836624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.836818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.836847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.837074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.837100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.837276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.837302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.837576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.837628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.837846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.837872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.838083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.838111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.516 [2024-07-15 20:40:12.838281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.838306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 
00:34:34.516 [2024-07-15 20:40:12.838484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.516 [2024-07-15 20:40:12.838510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.516 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.838659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.838685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.838886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.838915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.839104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.839130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.839390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.839439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.839621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.839649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.839803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.839835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.840040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.840066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.840278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.840304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.840449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.840475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 
00:34:34.517 [2024-07-15 20:40:12.840699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.840728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.840920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.840955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.841129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.841155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.841329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.841358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.841551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.841579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.841789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.841815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.842070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.842099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.842311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.842339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.842539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.842567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.842761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.842788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 
00:34:34.517 [2024-07-15 20:40:12.842988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.843018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.843245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.843273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.843461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.843489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.843678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.843704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.843882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.843913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.844107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.844135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.844324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.844354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.844520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.844546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.844714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.844740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.844937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.844964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 
00:34:34.517 [2024-07-15 20:40:12.845200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.845228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.845418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.845444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.845704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.845733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.845940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.845969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.846191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.846220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.846415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.846440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.846676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.846726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.846943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.846972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.847178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.847203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.847372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.847398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 
00:34:34.517 [2024-07-15 20:40:12.847674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.847723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.847912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.847941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.517 qpair failed and we were unable to recover it. 00:34:34.517 [2024-07-15 20:40:12.848130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.517 [2024-07-15 20:40:12.848159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.848374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.848400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.848627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.848652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.848818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.848844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.849000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.849028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.849219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.849260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.849445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.849474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.849676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.849720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 
00:34:34.518 [2024-07-15 20:40:12.849893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.849921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.850094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.850121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.850292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.850318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.850495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.850538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.850767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.850810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.850988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.851017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.851240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.851269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.851469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.851497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.851662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.851690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.851917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.851943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 
00:34:34.518 [2024-07-15 20:40:12.852097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.852123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.852360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.852389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.852592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.852621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.852852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.852896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.853116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.853141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.853369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.853398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.853590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.853619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.853804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.853833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.854032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.854058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.854249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.854277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 
00:34:34.518 [2024-07-15 20:40:12.854453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.854481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.854757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.854818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.855011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.855038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.855204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.855232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.855446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.855474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.855850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.855946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.856142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.856184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.856393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.856422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.856616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.856645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.856864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.856900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 
00:34:34.518 [2024-07-15 20:40:12.857094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.857119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.857290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.857318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.857510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.857538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.518 [2024-07-15 20:40:12.857870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.518 [2024-07-15 20:40:12.857935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.518 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.858149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.858175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.858348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.858376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.858589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.858617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.858831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.858860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.859074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.859100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.859318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.859347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 
00:34:34.519 [2024-07-15 20:40:12.859567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.859595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.859787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.859815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.859991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.860018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.860197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.860225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.860395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.860424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.860634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.860663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.860850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.860890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.861107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.861133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.861336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.861364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.861592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.861617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 
00:34:34.519 [2024-07-15 20:40:12.861811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.861839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.862009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.862039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.862236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.862265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.862473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.862499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.862692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.862721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.862965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.862991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.863212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.863241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.863427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.863452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.863605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.863648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.863815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.863844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 
00:34:34.519 [2024-07-15 20:40:12.864043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.864070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.864207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.864233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.864467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.864523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.864748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.864777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.864982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.865010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.865212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.865239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.865545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.865607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.865807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.865835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.866046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.519 [2024-07-15 20:40:12.866072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.519 qpair failed and we were unable to recover it. 00:34:34.519 [2024-07-15 20:40:12.866216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.866243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 
00:34:34.520 [2024-07-15 20:40:12.866395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.866423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.866603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.866633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.866785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.866821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.867007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.867033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.867251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.867280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.867493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.867522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.867746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.867776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.867975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.868002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.868313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.868370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.868565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.868595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 
00:34:34.520 [2024-07-15 20:40:12.868763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.868800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.869036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.869063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.869239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.869266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.869451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.869489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.869670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.869697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.869935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.869962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.870243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.870305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.870518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.870548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.870737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.870767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.870963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.870990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 
00:34:34.520 [2024-07-15 20:40:12.871208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.871238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.871398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.871438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.871647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.871675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.871818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.871843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.872015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.872042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.872234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.872264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.872455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.872484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.872701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.872727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.872966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.872995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.873143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.873183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 
00:34:34.520 [2024-07-15 20:40:12.873349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.873378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.873572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.873598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.873756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.873785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.873948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.873978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.874144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.874184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.874406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.874436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.874790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.874841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.875040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.875067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.875207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.875233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.520 [2024-07-15 20:40:12.875419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.875446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 
00:34:34.520 [2024-07-15 20:40:12.875800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.520 [2024-07-15 20:40:12.875851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.520 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.876060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.876089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.876312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.876339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.876488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.876514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.876759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.876817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.877043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.877073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.877261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.877291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.877491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.877517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.877679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.877708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.877947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.877977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 
00:34:34.521 [2024-07-15 20:40:12.878193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.878221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.878418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.878444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.878710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.878739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.878921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.878951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.879143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.879179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.879368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.879394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.879590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.879619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.879785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.879813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.879971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.880001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.880195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.880221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 
00:34:34.521 [2024-07-15 20:40:12.880404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.880431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.880624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.880651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.880850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.880894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.881082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.881108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.881315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.881344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.881538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.881567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.881726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.881755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.881947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.881973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.882171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.882210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.882366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.882395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 
00:34:34.521 [2024-07-15 20:40:12.882544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.882573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.882767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.882794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.882961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.882990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.883175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.883204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.883397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.883426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.883614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.883640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.883794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.883835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.884021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.884050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.884235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.884264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.884464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.884491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 
00:34:34.521 [2024-07-15 20:40:12.884683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.884712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.884903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.521 [2024-07-15 20:40:12.884941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.521 qpair failed and we were unable to recover it. 00:34:34.521 [2024-07-15 20:40:12.885124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.885153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.885344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.885370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.885539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.885565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.885712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.885739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.885911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.885939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.886111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.886138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.886400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.886453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.886642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.886672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 
00:34:34.522 [2024-07-15 20:40:12.886840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.886870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.887074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.887100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.887320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.887349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.887562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.887591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.887756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.887784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.887982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.888009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.888269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.888323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.888517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.888546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.888739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.888768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.888963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.888990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 
00:34:34.522 [2024-07-15 20:40:12.889157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.889244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.889462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.889491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.889679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.889708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.889903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.889936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.890083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.890108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.890306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.890333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.890534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.890564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.890762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.890789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.891025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.891052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.891223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.891249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 
00:34:34.522 [2024-07-15 20:40:12.891450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.891479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.891641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.891668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.891890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.891919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.892088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.892117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.892282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.892311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.892530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.892556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.892722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.892752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.892949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.892979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.893131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.893161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.893350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.893376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 
00:34:34.522 [2024-07-15 20:40:12.893705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.893770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.893958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.893988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.894157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.894186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.522 qpair failed and we were unable to recover it. 00:34:34.522 [2024-07-15 20:40:12.894401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.522 [2024-07-15 20:40:12.894428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.894773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.894831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.895031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.895061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.895229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.895259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.895456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.895482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.895651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.895682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.895868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.895905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 
00:34:34.523 [2024-07-15 20:40:12.896102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.896133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.896270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.896297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.896519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.896570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.896751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.896780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.896982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.897009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.897208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.897234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.897456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.897486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.897683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.897712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.897924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.897954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.898178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.898204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 
00:34:34.523 [2024-07-15 20:40:12.898399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.898428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.898642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.898669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.898832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.898861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.899066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.899092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.899325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.899354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.899547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.899578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.899793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.899822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.900016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.900043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.900214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.900243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.900433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.900463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 
00:34:34.523 [2024-07-15 20:40:12.900659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.900700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.900895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.900922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.901132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.901159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.901311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.901338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.901515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.901542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.901720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.901746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.901904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.901934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.902150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.902179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.902406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.902433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.523 qpair failed and we were unable to recover it. 00:34:34.523 [2024-07-15 20:40:12.902602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.523 [2024-07-15 20:40:12.902629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 
00:34:34.524 [2024-07-15 20:40:12.902775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.902801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.902999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.903029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.903221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.903250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.903412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.903438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.903613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.903675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.903892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.903922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.904137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.904166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.904353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.904379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.904548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.904578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.904731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.904760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 
00:34:34.524 [2024-07-15 20:40:12.904950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.904979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.905176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.905203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.905466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.905521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.905687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.905728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.905895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.905925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.906121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.906148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.906319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.906346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.906547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.906578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.906733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.906762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.906960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.906987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 
00:34:34.524 [2024-07-15 20:40:12.907151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.907182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.907374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.907403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.907617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.907646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.907870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.907904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.908130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.908160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.908342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.908371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.908588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.908618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.908844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.908871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.909081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.909111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.909297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.909328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 
00:34:34.524 [2024-07-15 20:40:12.909545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.909575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.909788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.909817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.910021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.910048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.910244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.910273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.910459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.910488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.910690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.910716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.910895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.910939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.911104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.911131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.911329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.911362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.911562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.911589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 
00:34:34.524 [2024-07-15 20:40:12.911760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.524 [2024-07-15 20:40:12.911786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.524 qpair failed and we were unable to recover it. 00:34:34.524 [2024-07-15 20:40:12.912006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.912036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.912228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.912258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.912428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.912455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.912636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.912665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.912895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.912922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.913088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.913115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.913288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.913314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.913605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.913664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.913861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.913899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 
00:34:34.525 [2024-07-15 20:40:12.914083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.914113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.914307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.914335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.914566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.914618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.914813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.914842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.915078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.915105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.915266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.915293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.915469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.915506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.915678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.915707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.915924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.915954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.916122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.916149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 
00:34:34.525 [2024-07-15 20:40:12.916345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.916374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.916561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.916590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.916754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.916784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.916980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.917006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.917156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.917182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.917366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.917396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.917647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.917674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.917852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.917884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.918062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.918092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.918280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.918308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 
00:34:34.525 [2024-07-15 20:40:12.918499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.918525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.918701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.918727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.919003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.919033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.919200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.919229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.919418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.919447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.919665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.919691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.919930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.919960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.920128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.920157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.920320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.920349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.920541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.920568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 
00:34:34.525 [2024-07-15 20:40:12.920737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.920768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.920958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.525 [2024-07-15 20:40:12.920989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.525 qpair failed and we were unable to recover it. 00:34:34.525 [2024-07-15 20:40:12.921158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.921188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.921410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.921437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.921749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.921798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.922024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.922054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.922218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.922248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.922418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.922444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.922615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.922642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.922818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.922849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 
00:34:34.526 [2024-07-15 20:40:12.923022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.923050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.923209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.923236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.923501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.923534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.923763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.923792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.923958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.923988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.924159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.924185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.924376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.924406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.924554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.924583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.924775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.924804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.924999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.925027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 
00:34:34.526 [2024-07-15 20:40:12.925194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.925223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.925409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.925438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.925627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.925656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.925840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.925867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.926037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.926067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.926281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.926310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.926503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.926533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.926695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.926722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.926889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.926920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.927103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.927133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 
00:34:34.526 [2024-07-15 20:40:12.927322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.927351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.927536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.927563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.927747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.927776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.927970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.928000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.928216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.928245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.928439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.928466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.928740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.928791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.526 [2024-07-15 20:40:12.928996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.526 [2024-07-15 20:40:12.929026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.526 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.929213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.929242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.929440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.929467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 
00:34:34.527 [2024-07-15 20:40:12.929671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.929724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.929927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.929955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.930170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.930199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.930387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.930414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.930575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.930605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.930790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.930819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.931028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.931056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.931228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.931255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.931570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.931623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.931818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.931847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 
00:34:34.527 [2024-07-15 20:40:12.932053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.932081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.932247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.932274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.932468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.932497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.932656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.932686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.932902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.932930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.933127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.933153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.933378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.933407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.933630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.933657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.933857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.933891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.934102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.934129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 
00:34:34.527 [2024-07-15 20:40:12.934326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.934355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.934520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.934549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.934740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.934795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.934990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.935018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.935210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.935239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.935430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.935459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.935659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.935688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.935910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.935938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.936108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.936137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.936325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.936353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 
00:34:34.527 [2024-07-15 20:40:12.936575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.936604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.936768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.936794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.936981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.937011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.937205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.937234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.937446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.937475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.937638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.937664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.937812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.937838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.938017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.938046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.527 [2024-07-15 20:40:12.938249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.527 [2024-07-15 20:40:12.938278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.527 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.938474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.938500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 
00:34:34.528 [2024-07-15 20:40:12.938701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.938734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.938954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.938984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.939163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.939192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.939361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.939387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.939539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.939566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.939742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.939768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.939944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.939975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.940134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.940161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.940348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.940377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.940562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.940592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 
00:34:34.528 [2024-07-15 20:40:12.940761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.940790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.940983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.941010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.941188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.941215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.941381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.941407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.941582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.941608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.941784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.941810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.942016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.942046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.942208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.942239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.942434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.942463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.942655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.942681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 
00:34:34.528 [2024-07-15 20:40:12.942872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.942909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.943100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.943131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.943352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.943381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.943572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.943598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.943789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.943817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.944038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.944068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.944227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.944256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.944470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.944500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.944655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.944682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.944833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.944859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 
00:34:34.528 [2024-07-15 20:40:12.945049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.945076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.945275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.945302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.945621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.945671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.945893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.528 [2024-07-15 20:40:12.945923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.528 qpair failed and we were unable to recover it. 00:34:34.528 [2024-07-15 20:40:12.946135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.946165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.946365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.946392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.946657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.946709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.946900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.946930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.947119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.947149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.947338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.947365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 
00:34:34.529 [2024-07-15 20:40:12.947577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.947629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.947810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.947841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.948049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.948087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.948274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.948301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.948623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.948682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.948871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.948915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.949106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.949136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.949356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.949383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.949596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.949648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.949866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.949905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 
00:34:34.529 [2024-07-15 20:40:12.950125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.950154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.950351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.950387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.950591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.950632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.950794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.950823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.951018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.951045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.951247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.951273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.951513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.951570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.951764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.951793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.952017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.952047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.952218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.952245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 
00:34:34.529 [2024-07-15 20:40:12.952546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.952600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.952816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.952845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.953094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.953120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.953300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.953326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.953544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.953595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.953786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.953815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.954006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.954036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.954230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.954257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.954569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.954633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.954823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.954853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 
00:34:34.529 [2024-07-15 20:40:12.955058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.955084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.955283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.955310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.955679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.955735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.955952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.955981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.956169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.956199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.529 qpair failed and we were unable to recover it. 00:34:34.529 [2024-07-15 20:40:12.956420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.529 [2024-07-15 20:40:12.956446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.956756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.956817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.957010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.957040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.957231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.957260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.957449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.957475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 
00:34:34.530 [2024-07-15 20:40:12.957738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.957789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.958006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.958036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.958241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.958270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.958462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.958488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.958656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.958687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.958904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.958934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.959130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.959160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.959344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.959371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.959525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.959553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.959764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.959792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 
00:34:34.530 [2024-07-15 20:40:12.959976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.960004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.960203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.960229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.960424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.960453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.960675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.960701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.960852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.960895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.961067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.961099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.961291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.961320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.961540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.961569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.961757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.961787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.961971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.961998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 
00:34:34.530 [2024-07-15 20:40:12.962153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.962180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.962361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.962397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.962562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.962591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.962782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.962809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.962997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.963027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.963207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.963236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.963394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.963424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.963620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.963646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.963864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.963902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.964069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.964098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 
00:34:34.530 [2024-07-15 20:40:12.964307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.964333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.964537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.964564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.964796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.964825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.964994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.965024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.965236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.965265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.965428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.965454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.530 [2024-07-15 20:40:12.965607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.530 [2024-07-15 20:40:12.965635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.530 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.965856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.965894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.966111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.966140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.966329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.966355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 
00:34:34.531 [2024-07-15 20:40:12.966545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.966575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.966758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.966787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.966975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.967009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.967199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.967226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.967558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.967614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.967809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.967838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.968062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.968090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.968239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.968266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.968423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.968450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.968624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.968654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 
00:34:34.531 [2024-07-15 20:40:12.968851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.968889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.969089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.969116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.969306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.969379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.969601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.969628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.969794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.969823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.970043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.970070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.970262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.970291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.970476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.970505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.970676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.970706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.970936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.970963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 
00:34:34.531 [2024-07-15 20:40:12.971169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.971198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.971393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.971422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.971614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.971643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.971805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.971831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.972033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.972063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.972225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.972254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.972446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.972476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.972669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.972695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.972864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.972900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.973111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.973144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 
00:34:34.531 [2024-07-15 20:40:12.973331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.973360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.973573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.973599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.973799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.973828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.974036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.974064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.974257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.974287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.974481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.974508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.974781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.974835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.975029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.531 [2024-07-15 20:40:12.975059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.531 qpair failed and we were unable to recover it. 00:34:34.531 [2024-07-15 20:40:12.975259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.975289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.975487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.975513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 
00:34:34.532 [2024-07-15 20:40:12.975737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.975796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.975988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.976018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.976210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.976240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.976465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.976491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.976731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.976782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.976975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.977005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.977234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.977261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.977434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.977460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.977652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.977681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.977857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.977899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 
00:34:34.532 [2024-07-15 20:40:12.978096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.978125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.978314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.978340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.978565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.978617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.978808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.978837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.979037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.979064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.979241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.979267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.979443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.979469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.979668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.979697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.979889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.979919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.980113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.980140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 
00:34:34.532 [2024-07-15 20:40:12.980416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.980466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.980679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.980708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.980882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.980913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.981134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.981160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.981441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.981494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.981681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.981710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.981907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.981937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.982159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.982186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.982464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.982516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.982732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.982761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 
00:34:34.532 [2024-07-15 20:40:12.982940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.982972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.983162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.983188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.983409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.983471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.532 [2024-07-15 20:40:12.983655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.532 [2024-07-15 20:40:12.983685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.532 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.983866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.983904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.984076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.984103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.984292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.984322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.984513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.984543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.984734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.984764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.984961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.984988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 
00:34:34.533 [2024-07-15 20:40:12.985161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.985190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.985343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.985373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.985560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.985590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.985812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.985839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.986044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.986074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.986226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.986255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.986446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.986477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.986645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.986672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.986815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.986841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.987042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.987069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 
00:34:34.533 [2024-07-15 20:40:12.987266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.987295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.987490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.987516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.987711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.987740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.987925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.987955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.988145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.988174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.988349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.988376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.988595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.988624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.988823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.988856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.989022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.989052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.989241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.989267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 
00:34:34.533 [2024-07-15 20:40:12.989490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.989552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.989734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.989764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.989952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.989983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.990183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.990210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.990562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.990628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.990841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.990870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.991061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.991091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.991314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.991340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.991488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.991515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 00:34:34.533 [2024-07-15 20:40:12.991664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.533 [2024-07-15 20:40:12.991690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.533 qpair failed and we were unable to recover it. 
00:34:34.534 [2024-07-15 20:40:12.991871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.991905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.992085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.992114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.992287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.992311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.992510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.992542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.992741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.992771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.992937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.992963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.993103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.993128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.993361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.993389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.993611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.993639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.993818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.993843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 
00:34:34.534 [2024-07-15 20:40:12.994004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.994030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.994220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.994251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.994486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.994511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.994659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.994684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.994857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.994898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.995128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.995156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.995349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.995389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.995590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.995615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.995807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.995835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.996026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.996052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 
00:34:34.534 [2024-07-15 20:40:12.996251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.996279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.996434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.996459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.996678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.996706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.996900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.996933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.997124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.997152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.997351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.997377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.997523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.997548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.997723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.997748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.997915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.997942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.998098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.998124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 
00:34:34.534 [2024-07-15 20:40:12.998303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.998328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.998524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.998549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.998691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.998717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.998891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.998926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.999057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.999082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.999251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.999277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.999446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.999472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.999661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.999686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:12.999852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:12.999885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:13.000050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:13.000075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 
00:34:34.534 [2024-07-15 20:40:13.000292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.534 [2024-07-15 20:40:13.000317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.534 qpair failed and we were unable to recover it. 00:34:34.534 [2024-07-15 20:40:13.000467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.000492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.000726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.000754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.000956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.000982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.001148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.001190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.001359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.001384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.001564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.001592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.001762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.001789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.001965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.001992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.002189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.002215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 
00:34:34.535 [2024-07-15 20:40:13.002444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.002473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.002633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.002662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.002850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.002887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.003069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.003094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.003309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.003337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.003563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.003591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.003815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.003843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.004023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.004049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.004244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.004271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.004433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.004461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 
00:34:34.535 [2024-07-15 20:40:13.004641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.004669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.004884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.004912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.005069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.005094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.005324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.005352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.005544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.005572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.005735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.005763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.005970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.005997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.006142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.006166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.006324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.006349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.006541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.006569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 
00:34:34.535 [2024-07-15 20:40:13.006781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.006809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.006990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.007016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.007193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.007222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.007415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.007440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.007612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.007639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.007817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.007845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.008058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.008084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.008258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.008286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.008439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.008526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.008689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.008731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 
00:34:34.535 [2024-07-15 20:40:13.008953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.008979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.009145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.009171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.535 [2024-07-15 20:40:13.009465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.535 [2024-07-15 20:40:13.009522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.535 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.009744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.009772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.009974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.010000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.010205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.010230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.010422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.010449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.010603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.010631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.010858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.010893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.011087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.011112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 
00:34:34.536 [2024-07-15 20:40:13.011301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.011329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.011506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.011533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.011722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.011749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.011945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.011971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.012108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.012133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.012361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.012402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.012568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.012597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.012852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.012889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.013085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.013110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.536 [2024-07-15 20:40:13.013307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.013334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 
00:34:34.536 [2024-07-15 20:40:13.013488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.536 [2024-07-15 20:40:13.013515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.536 qpair failed and we were unable to recover it. 00:34:34.821 [2024-07-15 20:40:13.013728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.821 [2024-07-15 20:40:13.013756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.821 qpair failed and we were unable to recover it. 00:34:34.821 [2024-07-15 20:40:13.013918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.821 [2024-07-15 20:40:13.013944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.821 qpair failed and we were unable to recover it. 00:34:34.821 [2024-07-15 20:40:13.014108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.821 [2024-07-15 20:40:13.014134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.821 qpair failed and we were unable to recover it. 00:34:34.821 [2024-07-15 20:40:13.014370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.821 [2024-07-15 20:40:13.014411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.821 qpair failed and we were unable to recover it. 00:34:34.821 [2024-07-15 20:40:13.014617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.821 [2024-07-15 20:40:13.014645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.821 qpair failed and we were unable to recover it. 00:34:34.821 [2024-07-15 20:40:13.014860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.014897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.015062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.015087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.015275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.015303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.015464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.015496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 
00:34:34.822 [2024-07-15 20:40:13.015711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.015739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.015928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.015954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.016131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.016172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.016378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.016407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.016589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.016617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.016802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.016829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.017000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.017026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.017217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.017245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.017401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.017425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.017570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.017611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 
00:34:34.822 [2024-07-15 20:40:13.017799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.017826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.018036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.018062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.018236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.018261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.018428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.018453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.018627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.018655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.018841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.018869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.019082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.019107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.019268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.019296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.019477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.019505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.019696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.019724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 
00:34:34.822 [2024-07-15 20:40:13.019900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.019933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.020082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.020107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.020247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.020272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.020494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.020554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.020769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.020797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.020997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.822 [2024-07-15 20:40:13.021023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.822 qpair failed and we were unable to recover it. 00:34:34.822 [2024-07-15 20:40:13.021174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.021206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.021401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.021429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.021607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.021634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.021791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.021819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 
00:34:34.823 [2024-07-15 20:40:13.022001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.022026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.022224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.022250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.022427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.022451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.022598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.022622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.022790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.022832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.023055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.023082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.023230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.023255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.023420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.023445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.023621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.023648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.023882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.023908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 
00:34:34.823 [2024-07-15 20:40:13.024114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.024139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.024360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.024388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.024584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.024612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.024799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.024827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.024993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.025018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.025194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.025219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.025365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.025390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.025566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.025591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.025764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.025789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.025960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.025989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 
00:34:34.823 [2024-07-15 20:40:13.026178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.026205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.026420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.026448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.026671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.026696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.026921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.026949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.027119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.027147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.027310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.027338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.027553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.027578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.027753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.823 [2024-07-15 20:40:13.027781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.823 qpair failed and we were unable to recover it. 00:34:34.823 [2024-07-15 20:40:13.027936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.027964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.028176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.028204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 
00:34:34.824 [2024-07-15 20:40:13.028399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.028425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.028644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.028671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.028868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.028903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.029094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.029123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.029289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.029314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.029532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.029560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.029752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.029779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.029951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.029985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.030217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.030243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.030476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.030501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 
00:34:34.824 [2024-07-15 20:40:13.030672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.030697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.030940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.030968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.031138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.031163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.031366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.031391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.031621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.031648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.031818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.031847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.032043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.032069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.032288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.032315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.032509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.032537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.032733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.032761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 
00:34:34.824 [2024-07-15 20:40:13.032927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.032953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.033094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.033138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.033359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.033384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.033582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.033607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.033851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.033883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.034028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.034053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.034217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.034245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.034405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.824 [2024-07-15 20:40:13.034433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.824 qpair failed and we were unable to recover it. 00:34:34.824 [2024-07-15 20:40:13.034652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.034677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.034845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.034872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 
00:34:34.825 [2024-07-15 20:40:13.035096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.035121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.035317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.035346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.035507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.035532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.035751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.035779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.035985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.036018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.036242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.036271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.036492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.036517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.036682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.036710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.036867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.036902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.037094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.037122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 
00:34:34.825 [2024-07-15 20:40:13.037283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.037309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.037494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.037522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.037701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.037729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.037900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.037939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.038155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.038180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.038364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.038389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.038572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.038600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.038789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.038818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.039046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.039072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.039243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.039271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 
00:34:34.825 [2024-07-15 20:40:13.039450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.039478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.039669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.039697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.039858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.039889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.040094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.040122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.040288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.040317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.040542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.040568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.040734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.040759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.040960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.040989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.041188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.041213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 00:34:34.825 [2024-07-15 20:40:13.041385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.825 [2024-07-15 20:40:13.041427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.825 qpair failed and we were unable to recover it. 
00:34:34.826 [2024-07-15 20:40:13.041617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.041641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.041829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.041862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.042063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.042088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.042246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.042272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.042439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.042464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.042683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.042710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.042900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.042938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.043127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.043155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.043369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.043394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.043591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.043618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 
00:34:34.826 [2024-07-15 20:40:13.043786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.043813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.043980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.044006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.044172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.044197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.044390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.044417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.044606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.044634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.044822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.044850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.045048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.045073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.045285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.045313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.045537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.045564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.045749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.045777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 
00:34:34.826 [2024-07-15 20:40:13.045947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.045973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.046177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.046205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.046368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.046395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.046583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.046612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.046794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.046819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.046995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.047021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.047164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.047189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.047390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.047418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.047612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.047637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 00:34:34.826 [2024-07-15 20:40:13.047829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.826 [2024-07-15 20:40:13.047857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.826 qpair failed and we were unable to recover it. 
00:34:34.826 [2024-07-15 20:40:13.048021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.048048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.048241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.048270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.048431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.048456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.048676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.048704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.048864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.048897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.049093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.049123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.049311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.049335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.049527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.049555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.049768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.049796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.049953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.049982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 
00:34:34.827 [2024-07-15 20:40:13.050172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.050197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.050386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.050415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.050638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.050666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.050858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.050892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.051086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.051111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.051333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.051361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.051531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.051558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.051775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.051803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.052031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.052057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 00:34:34.827 [2024-07-15 20:40:13.052198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.827 [2024-07-15 20:40:13.052238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.827 qpair failed and we were unable to recover it. 
00:34:34.828 [2024-07-15 20:40:13.052419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.052447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.052639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.052669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.052890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.052916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.053079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.053106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.053288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.053316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.053503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.053531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.053695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.053720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.053916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.053945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.054105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.054133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.054292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.054320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 
00:34:34.828 [2024-07-15 20:40:13.054539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.054564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.054753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.054781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.055006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.055032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.055254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.055282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.055448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.055472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.055660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.055687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.055901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.055930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.056114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.056142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.056333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.056357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.056551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.056584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 
00:34:34.828 [2024-07-15 20:40:13.056748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.056776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.056944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.057172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.057199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.057333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.057358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.057549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.057577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.057790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.057816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.058011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.058036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.058227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.058255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.058440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.058468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.058686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.058714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 
00:34:34.828 [2024-07-15 20:40:13.058882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.058908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.059084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.059109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.059281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.059309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.059526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.059555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.059752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.828 [2024-07-15 20:40:13.059776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.828 qpair failed and we were unable to recover it. 00:34:34.828 [2024-07-15 20:40:13.059941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.059967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.060159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.060187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.060342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.060370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.060589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.060614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.060811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.060839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 
00:34:34.829 [2024-07-15 20:40:13.061039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.061065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.061247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.061272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.061438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.061463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.061649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.061676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.061835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.061863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.062056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.062081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.062255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.062284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.062468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.062496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.062658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.062685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.062885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.062913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 
00:34:34.829 [2024-07-15 20:40:13.063101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.063126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.063323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.063351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.063545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.063570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.063743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.063769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.063939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.063964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.064105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.064130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.064282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.064307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.064477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.064503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.064643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.064668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.064819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.064847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 
00:34:34.829 [2024-07-15 20:40:13.065051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.065076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.065299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.065328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.065522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.065548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.065746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.065773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.065972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.065998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.066170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.066196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.066336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.066361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.066544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.066572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.066725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.066753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.066932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.066961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 
00:34:34.829 [2024-07-15 20:40:13.067182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.067207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.067431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.067459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.067648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.829 [2024-07-15 20:40:13.067677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.829 qpair failed and we were unable to recover it. 00:34:34.829 [2024-07-15 20:40:13.067897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.067931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.068102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.068129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.068305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.068330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.068525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.068550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.068750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.068778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.068999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.069024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.069195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.069225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 
00:34:34.830 [2024-07-15 20:40:13.069416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.069444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.069639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.069667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.069865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.069908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.070111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.070153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.070346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.070374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.070541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.070569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.070762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.070787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.070977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.071006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.071195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.071223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.071383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.071412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 
00:34:34.830 [2024-07-15 20:40:13.071572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.071598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.071775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.071800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.071970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.071996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.072170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.072195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.072390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.072416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.072565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.072589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.072763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.072787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.072983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.073011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.073216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.073241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.073448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.073473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 
00:34:34.830 [2024-07-15 20:40:13.073645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.073670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.073815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.073840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.073995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.074022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.074169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.074194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.830 qpair failed and we were unable to recover it. 00:34:34.830 [2024-07-15 20:40:13.074363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.830 [2024-07-15 20:40:13.074388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.074528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.074553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.074751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.074776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.074948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.074973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.075147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.075172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.075343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.075368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 
00:34:34.831 [2024-07-15 20:40:13.075520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.075546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.075740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.075765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.075925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.075951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.076092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.076117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.076316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.076345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.076510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.076535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.076708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.076733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.076894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.076920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.077071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.077097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.077296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.077321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 
00:34:34.831 [2024-07-15 20:40:13.077498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.077522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.077693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.077718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.077859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.077899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.078077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.078102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.078268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.078293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.078435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.078462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.078638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.078663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.078815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.078840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.078993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.079019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.079220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.079245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 
00:34:34.831 [2024-07-15 20:40:13.079419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.079445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.079615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.079640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.079809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.079834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.080010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.080035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.080177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.080203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.080352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.831 [2024-07-15 20:40:13.080376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.831 qpair failed and we were unable to recover it. 00:34:34.831 [2024-07-15 20:40:13.080514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.080539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.080714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.080739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.080904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.080930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.081131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.081156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 
00:34:34.832 [2024-07-15 20:40:13.081328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.081353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.081523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.081552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.081700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.081725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.081901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.081928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.082125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.082150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.082321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.082346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.082523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.082548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.082711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.082736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.082899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.082925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.083100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.083125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 
00:34:34.832 [2024-07-15 20:40:13.083300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.083326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.083498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.083522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.083684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.083709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.083854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.083885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.084034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.084059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.084274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.084299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.084499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.084524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.084690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.084715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.084890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.084916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.085082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.085107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 
00:34:34.832 [2024-07-15 20:40:13.085313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.085337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.085482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.085507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.085654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.832 [2024-07-15 20:40:13.085680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.832 qpair failed and we were unable to recover it. 00:34:34.832 [2024-07-15 20:40:13.085827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.085852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.086038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.086064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.086262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.086287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.086464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.086489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.086658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.086683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.086887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.086917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.087088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.087113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 
00:34:34.833 [2024-07-15 20:40:13.087286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.087311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.087516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.087541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.087737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.087763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.087932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.087958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.088134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.088159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.088324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.088349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.088491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.088516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.088699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.088723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.088900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.088926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.089080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.089105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 
00:34:34.833 [2024-07-15 20:40:13.089288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.089313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.089489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.089514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.089720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.089746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.089942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.089968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.090140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.090165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.833 [2024-07-15 20:40:13.090347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.833 [2024-07-15 20:40:13.090372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.833 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.090548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.090573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.090724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.090749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.090888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.090914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.091087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.091112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 
00:34:34.834 [2024-07-15 20:40:13.091321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.091346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.091527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.091552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.091721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.091746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.091889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.091914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.092058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.092083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.092249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.092275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.092455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.092480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.092662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.092687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.092861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.092892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.093068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.093093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 
00:34:34.834 [2024-07-15 20:40:13.093275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.093300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.093470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.093495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.093669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.093694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.093861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.093903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.094051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.094075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.094258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.094283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.094449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.094474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.094646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.094671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.094847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.094872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.095088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.095114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 
00:34:34.834 [2024-07-15 20:40:13.095285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.095311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.095503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.095528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.095670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.095695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.095866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.095898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.096077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.096103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.096272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.096297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.096459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.096484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.096678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.096703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.096907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.096933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 00:34:34.834 [2024-07-15 20:40:13.097102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.097127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.834 qpair failed and we were unable to recover it. 
00:34:34.834 [2024-07-15 20:40:13.097300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.834 [2024-07-15 20:40:13.097325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.097486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.097511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.097712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.097738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.097989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.098014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.098217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.098243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.098490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.098515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.098718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.098743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.098916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.098942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.099116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.099141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.099313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.099338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 
00:34:34.835 [2024-07-15 20:40:13.099509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.099535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.099735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.099759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.099907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.099934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.100190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.100215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.100389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.100414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.100611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.100636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.100810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.100840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.100990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.101016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.101192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.101218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.101388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.101413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 
00:34:34.835 [2024-07-15 20:40:13.101611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.101636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.101784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.101809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.102011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.102037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.102208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.102233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.102372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.102397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.102570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.102594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.102774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.102799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.102944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.102969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.103122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.103147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.103315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.103340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 
00:34:34.835 [2024-07-15 20:40:13.103526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.103551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.103721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.103746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.103912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.103944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.835 [2024-07-15 20:40:13.104128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.835 [2024-07-15 20:40:13.104153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.835 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.104307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.104334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.104515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.104541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.104710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.104735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.104905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.104940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.105109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.105135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.105305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.105329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 
00:34:34.836 [2024-07-15 20:40:13.105500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.105525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.105692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.105717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.105917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.105943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.106094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.106123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.106295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.106320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.106487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.106512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.106710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.106735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.106921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.106946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.107111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.107136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.107310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.107335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 
00:34:34.836 [2024-07-15 20:40:13.107514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.107539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.107708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.107733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.107938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.107964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.108119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.108144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.108396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.108421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.108595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.108619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.108785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.108810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.108993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.109019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.109198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.109224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.109398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.109423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 
00:34:34.836 [2024-07-15 20:40:13.109565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.109590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.109769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.109794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.109946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.109972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.110126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.110153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.110346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.110371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.110563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.110588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.836 qpair failed and we were unable to recover it. 00:34:34.836 [2024-07-15 20:40:13.110755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.836 [2024-07-15 20:40:13.110781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.110960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.110986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.111182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.111207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.111370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.111395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 
00:34:34.837 [2024-07-15 20:40:13.111565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.111596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.111768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.111793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.111944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.111970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.112146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.112170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.112343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.112368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.112538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.112563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.112759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.112783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.112957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.112983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.113163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.113188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.113332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.113357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 
00:34:34.837 [2024-07-15 20:40:13.113554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.113579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.113777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.113802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.113954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.113980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.114141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.114166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.114337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.114362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.114527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.114552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.114753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.114778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.114973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.114999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.115168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.115193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.115361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.115386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 
00:34:34.837 [2024-07-15 20:40:13.115553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.115579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.115755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.115780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.115979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.116013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.116159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.116185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.116387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.116412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.116587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.116612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.116783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.116808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.116985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.837 [2024-07-15 20:40:13.117022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.837 qpair failed and we were unable to recover it. 00:34:34.837 [2024-07-15 20:40:13.117181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.117206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.117387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.117412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 
00:34:34.838 [2024-07-15 20:40:13.117594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.117619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.117794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.117819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.117967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.117993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.118166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.118191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.118365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.118390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.118563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.118588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.118761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.118786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.118957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.118982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.119152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.119177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.119354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.119379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 
00:34:34.838 [2024-07-15 20:40:13.119554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.119579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.119752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.119781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.119958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.119984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.120161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.120186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.120329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.120354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.120523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.120547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.120729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.120754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.120948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.120974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.121155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.121180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.838 [2024-07-15 20:40:13.121332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.121357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 
00:34:34.838 [2024-07-15 20:40:13.121560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.838 [2024-07-15 20:40:13.121585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.838 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.121758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.121783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.121990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.122016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.122186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.122211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.122358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.122383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.122522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.122547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.122757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.122782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.122977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.123003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.123167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.123191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.123386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.123410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 
00:34:34.839 [2024-07-15 20:40:13.123553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.123578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.123752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.123777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.123969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.123994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.124163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.124187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.124364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.124389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.124596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.124622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.124771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.124796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.124962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.124988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.125158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.125196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.125373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.125399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 
00:34:34.839 [2024-07-15 20:40:13.125571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.125596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.125791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.125816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.125984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.126010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.126151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.126177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.126373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.126398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.126596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.126621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.126771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.126797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.126978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.127004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.839 [2024-07-15 20:40:13.127168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.839 [2024-07-15 20:40:13.127192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.839 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.127340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.127365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 
00:34:34.840 [2024-07-15 20:40:13.127506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.127532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.127676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.127701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.127886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.127912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.128159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.128185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.128384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.128409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.128551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.128576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.128740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.128765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.128901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.128927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.129101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.129127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.129323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.129348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 
00:34:34.840 [2024-07-15 20:40:13.129594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.129618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.129814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.129839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.130100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.130126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.130274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.130299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.130449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.130476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.130647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.130677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.130854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.130886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.131057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.131082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.131253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.131280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.131453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.131478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 
00:34:34.840 [2024-07-15 20:40:13.131630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.131655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.131799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.131824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.132073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.132099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.132246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.132271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.132454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.132479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.132647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.132672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.132833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.132860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.133082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.840 [2024-07-15 20:40:13.133107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.840 qpair failed and we were unable to recover it. 00:34:34.840 [2024-07-15 20:40:13.133264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.841 [2024-07-15 20:40:13.133289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.841 qpair failed and we were unable to recover it. 00:34:34.841 [2024-07-15 20:40:13.133467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.841 [2024-07-15 20:40:13.133492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.841 qpair failed and we were unable to recover it. 
00:34:34.841 [2024-07-15 20:40:13.133663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.841 [2024-07-15 20:40:13.133688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:34.841 qpair failed and we were unable to recover it.
00:34:34.841-00:34:34.845 [2024-07-15 20:40:13.133935 - 20:40:13.176853] the same three-message sequence (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock connection error for tqpair=0x1af5600 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats, with only the timestamps changing, roughly 200 more times.
00:34:34.845 [2024-07-15 20:40:13.177032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.177057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.177196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.177221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.177395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.177421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.177571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.177596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.177789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.177817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.178016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.178043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.178188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.178213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.178388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.178414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.178590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.178614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.178777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.178805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 
00:34:34.845 [2024-07-15 20:40:13.178996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.179022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.179217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.179242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.179380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.179405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.179576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.179602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.179751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.179776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.179953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.179980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.180161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.180186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.180369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.180395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.180564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.180589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.180774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.180802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 
00:34:34.845 [2024-07-15 20:40:13.181032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.181061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.181270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.181335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.181545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.181573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.181768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.181796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.182032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.182060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.182250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.182278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.182468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.182495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.182702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.182730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.182892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.182935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.183155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.183181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 
00:34:34.845 [2024-07-15 20:40:13.183349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.183374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.183587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.183615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.183828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.183856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.184088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.184116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.184419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.184488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.184723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.184751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.184958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.184983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.185147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.845 [2024-07-15 20:40:13.185173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.845 qpair failed and we were unable to recover it. 00:34:34.845 [2024-07-15 20:40:13.185359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.185384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.185546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.185572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 
00:34:34.846 [2024-07-15 20:40:13.185740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.185765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.185936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.185963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.186118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.186144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.186315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.186340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.186480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.186505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.186654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.186680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.186891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.186917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.187074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.187100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.187269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.187294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.187441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.187466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 
00:34:34.846 [2024-07-15 20:40:13.187612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.187637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.187810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.187835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.188014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.188040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.188176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.188201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.188386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.188411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.188551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.188576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.188743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.188768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.188940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.188967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.189165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.189190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.189351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.189379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 
00:34:34.846 [2024-07-15 20:40:13.189589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.189621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.189841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.189869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.190121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.190149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.190385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.190412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.190655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.190682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.190898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.190924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.191068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.191093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.191240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.191265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.191401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.191426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.191622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.191647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 
00:34:34.846 [2024-07-15 20:40:13.191818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.191842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.191984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.192009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.192160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.192185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.192359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.192384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.192561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.192586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.192778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.192806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.192992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.193017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.193216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.193242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.193416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.193441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 00:34:34.846 [2024-07-15 20:40:13.193628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.846 [2024-07-15 20:40:13.193653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.846 qpair failed and we were unable to recover it. 
00:34:34.847 [2024-07-15 20:40:13.193860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.193891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.194073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.194098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.194235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.194260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.194442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.194467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.194638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.194663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.194837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.194863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.195026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.195052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.195228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.195259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.195405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.195430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.195611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.195637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 
00:34:34.847 [2024-07-15 20:40:13.195808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.195833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.195998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.196023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.196219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.196244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.196418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.196443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.196637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.196662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.196796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.196821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.196969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.196996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.197166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.197191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.197356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.197383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.197571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.197600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 
00:34:34.847 [2024-07-15 20:40:13.197786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.197815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.197994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.198020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.198172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.198198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.198342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.198367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.198591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.198618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.198901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.198943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.199114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.199141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.199323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.199351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.199511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.199540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.199761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.199787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 
00:34:34.847 [2024-07-15 20:40:13.199991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.200017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.200201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.200229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.200398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.200426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.200591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.200616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.200805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.200833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.201014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.201039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.201206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.201234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.201422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.201447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.201666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.201693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.201873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.201908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 
00:34:34.847 [2024-07-15 20:40:13.202102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.202131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.202314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.202339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.202529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.202557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.202712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.202740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.202940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.202967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.203138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.203163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.203342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.203368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.203550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.203578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.203755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.203784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 00:34:34.847 [2024-07-15 20:40:13.203972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.847 [2024-07-15 20:40:13.203997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.847 qpair failed and we were unable to recover it. 
00:34:34.847 [2024-07-15 20:40:13.204188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.847 [2024-07-15 20:40:13.204216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:34.847 qpair failed and we were unable to recover it.
[The three-message error group above repeats back to back, with only the microsecond timestamps advancing, from [2024-07-15 20:40:13.204188] through [2024-07-15 20:40:13.247733] (about 210 occurrences, log prefixes 00:34:34.847 through 00:34:34.851). Every connect() attempt to the NVMe/TCP target at 10.0.0.2, port 4420 on tqpair=0x1af5600 fails with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it."]
00:34:34.851 [2024-07-15 20:40:13.247907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.247933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.248078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.248103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.248318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.248344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.248488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.248512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.248689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.248714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.248893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.248919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.249070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.249095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.249283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.249308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.249457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.249482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.249657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.249682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 
00:34:34.851 [2024-07-15 20:40:13.249827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.249853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.250028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.250053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.250222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.250247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.250422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.250447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.250646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.250671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.250859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.250894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.251078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.251103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.251277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.251302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.251476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.251501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.251738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.251790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 
00:34:34.851 [2024-07-15 20:40:13.252028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.252057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.851 [2024-07-15 20:40:13.252249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.851 [2024-07-15 20:40:13.252274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.851 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.252472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.252497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.252654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.252681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.252871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.252913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.253167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.253192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.253393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.253419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.253588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.253613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.253786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.253812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.254060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.254086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 
00:34:34.852 [2024-07-15 20:40:13.254264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.254289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.254454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.254479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.254651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.254680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.254846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.254870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.255081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.255107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.255278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.255303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.255471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.255496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.255688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.255713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.255850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.255881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.256061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.256086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 
00:34:34.852 [2024-07-15 20:40:13.256253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.256278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.256471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.256496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.256711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.256774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.256972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.257006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.257182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.257207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.257382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.257407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.257588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.257614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.257761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.257788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.257981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.258007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.258149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.258174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 
00:34:34.852 [2024-07-15 20:40:13.258357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.258383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.258558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.258583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.258832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.258857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.259008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.259033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.259188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.259214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.259414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.259438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.259580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.259605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.259803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.259828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.260080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.260106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.260275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.260304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 
00:34:34.852 [2024-07-15 20:40:13.260507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.260531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.260701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.260726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.260874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.260909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.261157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.261182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.261356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.261381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.261530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.261556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.261772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.261800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.261991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.262017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.262189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.262214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.262378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.262406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 
00:34:34.852 [2024-07-15 20:40:13.262675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.262724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.262916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.262943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.852 [2024-07-15 20:40:13.263142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.852 [2024-07-15 20:40:13.263168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.852 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.263337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.263362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.263508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.263533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.263688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.263713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.263885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.263910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.264059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.264084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.264256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.264282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.264453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.264480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 
00:34:34.853 [2024-07-15 20:40:13.264656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.264681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.264910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.264936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.265112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.265139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.265309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.265335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.265526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.265551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.265700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.265724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.265924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.265954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.266123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.266148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.266324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.266348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.266522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.266547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 
00:34:34.853 [2024-07-15 20:40:13.266736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.266764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.266955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.266980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.267134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.267159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.267337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.267362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.267534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.267559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.267783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.267810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.268013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.268039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.268213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.268238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.268437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.268462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.268633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.268657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 
00:34:34.853 [2024-07-15 20:40:13.268831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.268856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.269032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.269058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.269255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.269284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.269681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.269740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.269951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.269977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.270181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.270205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.270350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.270376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.270543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.270567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.270735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.270760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.270933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.270958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 
00:34:34.853 [2024-07-15 20:40:13.271133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.271158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.271309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.271334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.271482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.271507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.271660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.271685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.271872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.271905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.272076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.272101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.272277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.272302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.272499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.272527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.272707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.272736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.272943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.272971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 
00:34:34.853 [2024-07-15 20:40:13.273155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.273180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.273425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.273450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.273699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.273724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.273871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.273902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.274074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.274099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.274268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.274293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.274470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.274496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.274672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.274697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.274895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.274920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 00:34:34.853 [2024-07-15 20:40:13.275066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.853 [2024-07-15 20:40:13.275091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.853 qpair failed and we were unable to recover it. 
00:34:34.854 [2024-07-15 20:40:13.275265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.275291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.275448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.275473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.275672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.275697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.275865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.275907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.276109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.276135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.276305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.276330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.276469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.276495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.276646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.276672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.276850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.276885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.277036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.277061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 
00:34:34.854 [2024-07-15 20:40:13.277232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.277257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.277437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.277462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.277617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.277642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.277862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.277899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.278062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.278086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.278233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.278258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.278435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.278461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.278656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.278681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.278854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.278885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.279060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.279085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 
00:34:34.854 [2024-07-15 20:40:13.279249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.279274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.279440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.279465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.279613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.279639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.279832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.279860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.280068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.280099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.280298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.280323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.280520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.280548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.280764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.280792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.281034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.281064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.281320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.281371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 
00:34:34.854 [2024-07-15 20:40:13.281610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.281638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.281847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.281875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.282104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.282133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.282371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.282399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.282606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.282633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.282869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.282905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.283098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.283123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.283276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.283302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.283502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.283527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.283674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.283699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 
00:34:34.854 [2024-07-15 20:40:13.283867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.283910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.284053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.284078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.284254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.284279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.284457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.284482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.284682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.284707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.284882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.284908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.285079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.285104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.285267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.285293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.285439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.285464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.285631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.285656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 
00:34:34.854 [2024-07-15 20:40:13.285844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.285872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.286078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.286107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.286252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.286277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.286423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.286448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.286621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.286645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.286796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.286821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.287049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.287091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.287294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.287319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.287468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.287493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.287633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.287657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 
00:34:34.854 [2024-07-15 20:40:13.287852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.287887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.288103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.854 [2024-07-15 20:40:13.288128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.854 qpair failed and we were unable to recover it. 00:34:34.854 [2024-07-15 20:40:13.288306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.288331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.288505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.288532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.288707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.288736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.288956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.288982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.289122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.289149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.289319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.289345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.289516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.289541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.289685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.289710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 
00:34:34.855 [2024-07-15 20:40:13.289859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.289890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.290139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.290164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.290360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.290385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.290553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.290577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.290713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.290737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.290918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.290944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.291116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.291141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.291309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.291334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.291503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.291528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.291696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.291721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 
00:34:34.855 [2024-07-15 20:40:13.291872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.291911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.292109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.292134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.292274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.292299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.292469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.292494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.292671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.292697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.292873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.292907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.293079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.293104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.293251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.293278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.293456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.293482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.293663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.293688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 
00:34:34.855 [2024-07-15 20:40:13.293838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.293863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.294041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.294066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.294267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.294293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.294541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.294566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.294763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.294791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.294986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.295014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.295170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.295196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.295347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.295372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.295514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.295538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.295704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.295732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 
00:34:34.855 [2024-07-15 20:40:13.295919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.295946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.296194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.296220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.296411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.296436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.296583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.296608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.296814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.296840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.297023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.297049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.297253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.297278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.297451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.297476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.297649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.855 [2024-07-15 20:40:13.297675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.855 qpair failed and we were unable to recover it. 00:34:34.855 [2024-07-15 20:40:13.297870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.856 [2024-07-15 20:40:13.297903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.856 qpair failed and we were unable to recover it. 
00:34:34.856 [2024-07-15 20:40:13.298079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.856 [2024-07-15 20:40:13.298104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.298277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.298302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.298478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.298504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.298641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.298666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.298913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.298939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.299145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.299170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.299355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.299381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.299552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.299578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.299729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.299753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.299903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.299933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 
00:34:34.857 [2024-07-15 20:40:13.300079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.300105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.300250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.300275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.300420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.300445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.300590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.300615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.300752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.300777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.300945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.300971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.301169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.301195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.301348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.301517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.301542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.301673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.301698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 
00:34:34.857 [2024-07-15 20:40:13.301867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.301900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.302045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.302070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.302253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.302279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.302450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.302475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.302672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.302697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.302872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.302904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.303075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.303100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.303244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.303269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.303411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.303436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.303569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.303594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 
00:34:34.857 [2024-07-15 20:40:13.303766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.303791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.303963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.303989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.304130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.304155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.304324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.304349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.304519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.304545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.304711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.304736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.304891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.304934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.305081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.305106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.305302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.305327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.305496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.305521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 
00:34:34.857 [2024-07-15 20:40:13.305692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.305722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.305945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.305971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.306224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.306249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.306428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.306454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.306605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.306630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.306797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.306822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.307013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.307039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.307179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.307205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.307405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.307430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.307674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.307699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 
00:34:34.857 [2024-07-15 20:40:13.307882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.307908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.308081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.308106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.308255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.308280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.308448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.308473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.308636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.308662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.308864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.308896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.309067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.309092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.309257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.309282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.309448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.309473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 00:34:34.857 [2024-07-15 20:40:13.309644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.857 [2024-07-15 20:40:13.309669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:34.857 qpair failed and we were unable to recover it. 
00:34:34.857 [2024-07-15 20:40:13.309816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:34.857 [2024-07-15 20:40:13.309841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 
00:34:34.857 qpair failed and we were unable to recover it. 
[The same pair of records — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." — repeats for every reconnection attempt between 20:40:13.309 and 20:40:13.352 (console timestamps 00:34:34.857 through 00:34:35.141).]
00:34:35.141 [2024-07-15 20:40:13.352802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:35.141 [2024-07-15 20:40:13.352830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 
00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-15 20:40:13.353031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.353057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.353248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.353276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.353444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.353472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.353689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.353715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.353886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.353912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.354082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.354107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.354265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.354292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.354487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.354515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.354678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.354703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.354925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.354954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-15 20:40:13.355167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.355200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.355396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.355424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.355584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.355609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.355749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.355791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.355970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.355996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.356141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.356182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.356396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.356421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.356570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.356595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.356771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.356796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.357039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.357065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-15 20:40:13.357239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.357265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.357458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.357485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.357649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.357676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.357865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.357904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.358074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.358100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.358289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.358317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-15 20:40:13.358466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-15 20:40:13.358494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.358656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.358684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.358875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.358908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.359086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.359111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-15 20:40:13.359275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.359300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.359496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.359524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.359721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.359746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.359937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.359966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.360150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.360178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.360329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.360356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.360569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.360594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.360756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.360784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.360982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.361020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.361188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.361216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-15 20:40:13.361410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.361435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.361599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.361628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.361793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.361821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.362002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.362028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.362222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.362247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.362412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.362441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.362631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.362659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.362844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.362872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.363073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.363098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.363311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.363339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-15 20:40:13.363542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.363570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.363809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.363852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.364098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.364126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.364303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.364329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.364507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.364533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.364684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.364710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.364891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.364918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.365116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-15 20:40:13.365143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-15 20:40:13.365318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.365344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.365516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.365541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-15 20:40:13.365919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.365947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.366120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.366146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.366312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.366338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.366505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.366531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.366710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.366736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.366940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.366966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.367139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.367165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.367341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.367368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.367543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.367569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.367743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.367769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-15 20:40:13.367913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.367939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.368111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.368137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.368314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.368340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.368490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.368516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.368728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.368756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.368948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.368974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.369150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.369175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.369356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.369381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.369560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.369585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.369729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.369754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-15 20:40:13.369940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.369968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.370147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.370172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.370345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.370370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.370541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.370566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.370733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.370759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.370920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.370946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.371114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.371140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.371278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.371304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.371485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.371512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.371678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.371703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-15 20:40:13.371858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.371891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.372054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.372085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.372288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.372314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.372484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.372510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.372679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.372704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-15 20:40:13.372838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-15 20:40:13.372863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.373053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.373080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.373251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.373278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.373449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.373475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.373672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.373698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-15 20:40:13.373871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.373905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.374080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.374106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.374273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.374300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.374469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.374494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.374699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.374727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.374927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.374965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.375144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.375169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.375341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.375366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.375562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.375587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.375733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.375759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-15 20:40:13.375907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.375934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.376137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.376162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.376342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.376368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.376523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.376551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.376706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.376731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.376890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.376918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.377100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.377127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.377272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.377297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.377494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.377519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.377738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.377765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-15 20:40:13.377955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.377982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.378153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.378179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.378376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.378405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.378645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.378673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.378867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.378899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.379057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.379082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.379278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.379306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.379489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.379519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.379743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.379772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-15 20:40:13.379997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-15 20:40:13.380024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-15 20:40:13.380170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.380197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.380410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.380443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.380814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.380889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.381093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.381120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.381312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.381342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.381528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.381561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.381767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.381802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.381970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.381999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.382211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.382239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-15 20:40:13.382423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-15 20:40:13.382450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 
00:34:35.150 [2024-07-15 20:40:13.426050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.426075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.426221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.426246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.426444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.426469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.426641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.426668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.426871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.426903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.427085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.427111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.427306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.427331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.427504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.427529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.427700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.427726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.427865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.427897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 
00:34:35.150 [2024-07-15 20:40:13.428043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.428068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.428223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.428248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.428400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.428426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.428573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.428599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.428743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.428768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.428910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.428936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.429122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.429147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.429329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.429354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.429533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.429560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.429693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.429719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 
00:34:35.150 [2024-07-15 20:40:13.429984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.430010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.430162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.430187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.430360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.430385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.430556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.430582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.430761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.430786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-15 20:40:13.430956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-15 20:40:13.430982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.431183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.431208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.431407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.431432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.431609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.431635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.431804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.431829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 
00:34:35.151 [2024-07-15 20:40:13.432032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.432058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.432231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.432261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.432461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.432486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.432639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.432663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.432864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.432911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.433103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.433129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.433306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.433331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.433505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.433530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.433700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.433725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.433906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.433932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 
00:34:35.151 [2024-07-15 20:40:13.434131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.434157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.434327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.434352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.434549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.434574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.434714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.434740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.434919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.434946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.435119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.435145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.435319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.435344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.435519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.435544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.435717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.435742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.435947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.435972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 
00:34:35.151 [2024-07-15 20:40:13.436122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.436148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.436324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.436351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.436527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.436552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.436688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.436714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.436900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.436927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.437101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.437126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.437299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.437325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.437470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.437496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.437645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.437672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.437873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.437904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 
00:34:35.151 [2024-07-15 20:40:13.438081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.438107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.438246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.438271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.438440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.438466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.151 [2024-07-15 20:40:13.438659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.151 [2024-07-15 20:40:13.438684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.151 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.438890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.438916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.439112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.439137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.439314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.439339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.439513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.439538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.439712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.439738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.439935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.439961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 
00:34:35.152 [2024-07-15 20:40:13.440112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.440137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.440343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.440372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.440576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.440602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.440771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.440797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.440975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.441001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.441173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.441199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.441398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.441424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.441592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.441617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.441814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.441840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.442019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.442044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 
00:34:35.152 [2024-07-15 20:40:13.442219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.442245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.442391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.442416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.442555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.442580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.442748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.442773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.442947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.442973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.443147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.443173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.443311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.443336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.443509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.443535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.443711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.443738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.443908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.443934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 
00:34:35.152 [2024-07-15 20:40:13.444108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.444133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.444298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.444324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.444523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.444549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.444724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.444749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.444899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.444925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.445118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.445144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.445288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.445314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.445492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.445517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.445654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.445680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.445885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.445912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 
00:34:35.152 [2024-07-15 20:40:13.446060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.446087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.446261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.446286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.446485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.446510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.446683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.446708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.446861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.446894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.447068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.447093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.447242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.447268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.447437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.447462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.152 [2024-07-15 20:40:13.447651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.152 [2024-07-15 20:40:13.447677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.152 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.447851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.447881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 
00:34:35.153 [2024-07-15 20:40:13.448034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.448059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.448263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.448295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.448495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.448521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.448657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.448683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.448885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.448911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.449062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.449088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.449228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.449254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.449390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.449415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.449588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.449614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.449813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.449838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 
00:34:35.153 [2024-07-15 20:40:13.449995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.450021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.450195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.450220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.450423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.450448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.450592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.450618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.450753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.450778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.450960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.450986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.451162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.451188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.451331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.451356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.451556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.451581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.451743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.451768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 
00:34:35.153 [2024-07-15 20:40:13.451947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.451974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.452116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.452141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.452354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.452379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.452544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.452574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.452771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.452795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.452964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.452989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.453185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.453210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.453408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.453433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.453588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.453613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.453792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.453817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 
00:34:35.153 [2024-07-15 20:40:13.453990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.454016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.454189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.454214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.454414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.454440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.454607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.454632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.454832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.454857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.455033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.455058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.455212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.455238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.455377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.455403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.455546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.455571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.455732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.455758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 
00:34:35.153 [2024-07-15 20:40:13.455929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.455955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.456179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.456212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.456578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.456639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.456828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.456854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.457042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.457068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.457213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.457239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.457437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.457462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.457664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.457690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.457863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.153 [2024-07-15 20:40:13.457908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.153 qpair failed and we were unable to recover it. 00:34:35.153 [2024-07-15 20:40:13.458070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.458095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 
00:34:35.154 [2024-07-15 20:40:13.458250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.458277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.458446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.458471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.458639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.458664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.458859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.458891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.459072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.459098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.459270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.459296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.459468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.459494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.459668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.459694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.459875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.459915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.460079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.460104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 
00:34:35.154 [2024-07-15 20:40:13.460268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.460293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.460475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.460500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.460674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.460699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.460872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.460914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.461059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.461084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.461268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.461294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.461480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.461505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.461675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.461699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.461901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.461927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.462074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.462099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 
00:34:35.154 [2024-07-15 20:40:13.462295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.462321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.462515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.462540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.462709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.462734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.462904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.462931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.463107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.463133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.463304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.463329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.463528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.463553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.463695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.463720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.463891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.463917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.464069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.464095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 
00:34:35.154 [2024-07-15 20:40:13.464246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.464272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.464438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.464468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.464682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.464709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.464888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.464913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.465079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.465104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.465268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.465294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.465464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.465489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.465627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.465652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.465799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.465824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.466006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.466032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 
00:34:35.154 [2024-07-15 20:40:13.466207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.466233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.154 [2024-07-15 20:40:13.466376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.154 [2024-07-15 20:40:13.466401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.154 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.466546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.466571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.466718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.466744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.466921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.466947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.467123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.467159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.467313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.467339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.467485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.467511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.467680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.467706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.467890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.467921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 
00:34:35.155 [2024-07-15 20:40:13.468121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.468146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.468320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.468347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.468550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.468577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.468715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.468741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.468887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.468922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.469090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.469115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.469318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.469344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.469515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.469541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.469719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.469744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.469898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.469924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 
00:34:35.155 [2024-07-15 20:40:13.470100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.470125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.470294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.470319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.470494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.470520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.470687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.470712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.470915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.470947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.471147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.471172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.471344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.471371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.471545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.471571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.471746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.471773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.471948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.471974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 
00:34:35.155 [2024-07-15 20:40:13.472115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.472142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.472312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.472339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.472510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.472537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.472687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.472712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.472911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.472937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.473083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.473108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.473287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.473314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.473512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.473538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.473678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.473704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.473883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.473910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 
00:34:35.155 [2024-07-15 20:40:13.474088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.474114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.474283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.474309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.474507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.474533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.474706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.474732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.474906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.474932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.475106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.475132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.475308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.475333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.475525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.155 [2024-07-15 20:40:13.475551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.155 qpair failed and we were unable to recover it. 00:34:35.155 [2024-07-15 20:40:13.475696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.475722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.475891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.475921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 
00:34:35.156 [2024-07-15 20:40:13.476063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.476089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.476244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.476269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.476473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.476499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.476639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.476665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.476865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.476897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.477049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.477074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.477240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.477266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.477438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.477464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.477636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.477665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.477839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.477865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 
00:34:35.156 [2024-07-15 20:40:13.478020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.478048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.478216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.478242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.478403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.478430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.478602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.478627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.478824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.478852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.479053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.479078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.479245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.479270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.479438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.479464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.479636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.479662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.479824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.479852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 
00:34:35.156 [2024-07-15 20:40:13.480024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.480050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.480220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.480245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.480449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.480475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.480677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.480703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.480874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.480906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.481070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.481096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.481302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.481327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.481495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.481521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.481693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.481719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.481893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.481918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 
00:34:35.156 [2024-07-15 20:40:13.482089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.482114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.482315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.482340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.482521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.482547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.482745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.482770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.482949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.482975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.483131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.483156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.483330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.483356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.483517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.483542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.483755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.483784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.483955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.483982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 
00:34:35.156 [2024-07-15 20:40:13.484131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.484158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.484330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.484355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.484562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.484588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.484761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.484787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.484959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.484985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.485187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.485212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.485350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.156 [2024-07-15 20:40:13.485375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.156 qpair failed and we were unable to recover it. 00:34:35.156 [2024-07-15 20:40:13.485520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.485546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.485746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.485774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.485924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.485950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 
00:34:35.157 [2024-07-15 20:40:13.486088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.486114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.486263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.486289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.486444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.486470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.486616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.486642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.486843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.486869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.487030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.487056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.487241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.487266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.487413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.487439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.487639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.487664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.487821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.487848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 
00:34:35.157 [2024-07-15 20:40:13.488040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.488066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.488212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.488247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.488408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.488434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.488581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.488606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.488779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.488805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.488978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.489004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.489181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.489206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.489357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.489383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.489550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.489576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.489752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.489777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 
00:34:35.157 [2024-07-15 20:40:13.489946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.489973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.490115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.490146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.490291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.490317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.490487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.490513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.490685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.490710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.490888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.490914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.491080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.491106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.491256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.491281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.491456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.491481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.491689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.491715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 
00:34:35.157 [2024-07-15 20:40:13.491861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.491892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.492064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.492089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.492254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.492280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.492444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.492469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.492634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.492660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.492833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.492871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.493052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.493078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.493271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.493296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.157 [2024-07-15 20:40:13.493444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.157 [2024-07-15 20:40:13.493474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.157 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.493679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.493705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 
00:34:35.158 [2024-07-15 20:40:13.493870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.493941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.494110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.494135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.494309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.494336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.494485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.494511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.494685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.494711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.494856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.494891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.495064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.495090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.495287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.495313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.495491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.495516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.495654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.495680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 
00:34:35.158 [2024-07-15 20:40:13.495853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.495885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.496055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.496081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.496259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.496285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.496457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.496484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.496669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.496694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.496893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.496929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.497081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.497106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.497304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.497330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.497462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.497488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.497636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.497662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 
00:34:35.158 [2024-07-15 20:40:13.497837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.497864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.498030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.498056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.498205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.498230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.498401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.498426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.498597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.498623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.498812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.498838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.498990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.499016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.499157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.499183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.499348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.499373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.499515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.499540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 
00:34:35.158 [2024-07-15 20:40:13.499734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.499760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.499912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.499939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.500137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.500170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.500307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.500333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.500509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.500534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.500736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.500761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.500940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.500967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.501100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.501125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.501271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.501301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.501476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.501502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 
00:34:35.158 [2024-07-15 20:40:13.501682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.501707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.501907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.501933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.502101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.158 [2024-07-15 20:40:13.502127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.158 qpair failed and we were unable to recover it. 00:34:35.158 [2024-07-15 20:40:13.502299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.502324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.502526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.502552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.502723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.502748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.502920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.502946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.503080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.503105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.503251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.503278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.503443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.503469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 
00:34:35.159 [2024-07-15 20:40:13.503620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.503645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.503834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.503862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.504073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.504099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.504396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.504470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.504704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.504733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.504951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.504977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.505130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.505156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.505323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.505348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.505492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.505517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.505692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.505717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 
00:34:35.159 [2024-07-15 20:40:13.505891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.505917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.506072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.506098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.506269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.506294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.506440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.506466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.506650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.506677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.506884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.506910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.507082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.507109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.507314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.507340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.507515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.507540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.507716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.507741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 
00:34:35.159 [2024-07-15 20:40:13.507935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.507961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.508134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.508159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.508329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.508355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.508520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.508546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.508740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.508765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.508975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.509001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.509178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.509203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.509411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.509436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.509600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.509629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.509804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.509829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 
00:34:35.159 [2024-07-15 20:40:13.510012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.510039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.510211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.510236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.510423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.510449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.510591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.510617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.510790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.510816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.511014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.511041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.511214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.511239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.511423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.511448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.511625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.511651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.511835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.511859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 
00:34:35.159 [2024-07-15 20:40:13.512022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.512048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.159 [2024-07-15 20:40:13.512214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.159 [2024-07-15 20:40:13.512240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.159 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.512457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.512483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.512680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.512706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.512903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.512948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.513125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.513150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.513293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.513319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.513482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.513508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.513720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.513745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.513941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.513968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 
00:34:35.160 [2024-07-15 20:40:13.514147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.514172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.514342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.514367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.514570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.514596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.514795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.514820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.514993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.515019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.515190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.515215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.515388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.515414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.515549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.515575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.515724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.515749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.515945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.515971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 
00:34:35.160 [2024-07-15 20:40:13.516112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.516138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.516287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.516313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.516461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.516487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.516637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.516665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.516862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.516893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.517066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.517092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.517301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.517327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.517473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.517499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.517648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.517678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.517819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.517844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 
00:34:35.160 [2024-07-15 20:40:13.518031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.518058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.518235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.518260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.518468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.518494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.518661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.518686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.518856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.518887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.519028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.519054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.519263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.519289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.519487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.519512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.519660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.519686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 00:34:35.160 [2024-07-15 20:40:13.519888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.160 [2024-07-15 20:40:13.519914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.160 qpair failed and we were unable to recover it. 
00:34:35.160 [2024-07-15 20:40:13.520061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.160 [2024-07-15 20:40:13.520087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420
00:34:35.160 qpair failed and we were unable to recover it.
00:34:35.160 [... the same three-line error repeats for every reconnect attempt from 20:40:13.520261 through 20:40:13.562034: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f3d64000b90 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:35.164 [2024-07-15 20:40:13.562034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.164 [2024-07-15 20:40:13.562060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420
00:34:35.164 qpair failed and we were unable to recover it.
00:34:35.164 [2024-07-15 20:40:13.562266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.562292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.562464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.562490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.562659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.562685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.562866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.562898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.563100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.563126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.563277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.563303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.563476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.563502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.563685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.563712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.563918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.563944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.164 [2024-07-15 20:40:13.564092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.564118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 
00:34:35.164 [2024-07-15 20:40:13.564322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.164 [2024-07-15 20:40:13.564347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.164 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.564523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.564548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.564751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.564777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.564930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.564956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.565108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.565134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.565306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.565331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.565469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.565494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.565665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.565691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.565899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.565936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.566135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.566161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 
00:34:35.165 [2024-07-15 20:40:13.566361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.566386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.566560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.566587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.566726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.566752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.566947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.566974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.567173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.567198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.567397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.567422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.567572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.567597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.567762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.567788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.567933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.567960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.568101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.568126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 
00:34:35.165 [2024-07-15 20:40:13.568276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.568302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.568456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.568481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.568643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.568670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.568864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.568896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.569092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.569118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.569302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.569328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.569532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.569558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.569728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.569754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.569927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.569952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.570094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.570121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 
00:34:35.165 [2024-07-15 20:40:13.570319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.570346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.570542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.570567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.570741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.570766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.570935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.570961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.571159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.571185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.571384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.571410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.571547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.571572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.571774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.571800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.571955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.571982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.572132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.572158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 
00:34:35.165 [2024-07-15 20:40:13.572356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.572382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.572527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.572553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.572757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.572783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.572981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.573007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.573183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.573208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.573380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.165 [2024-07-15 20:40:13.573405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.165 qpair failed and we were unable to recover it. 00:34:35.165 [2024-07-15 20:40:13.573547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.573573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.573733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.573759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.573910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.573936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.574138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.574164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 
00:34:35.166 [2024-07-15 20:40:13.574351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.574376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.574560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.574586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.574726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.574751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.574916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.574944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.575120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.575145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.575312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.575338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.575493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.575518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.575666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.575703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.575941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.575967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.576161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.576186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 
00:34:35.166 [2024-07-15 20:40:13.576387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.576413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.576585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.576610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.576763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.576790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.576932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.576958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.577144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.577169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.577340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.577365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.577537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.577563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.577738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.577764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.577914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.577941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.578141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.578166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 
00:34:35.166 [2024-07-15 20:40:13.578342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.578368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.578566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.578591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.578790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.578815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.578984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.579010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.579209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.579235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.579420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.579449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.579622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.579649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.579848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.579884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.580108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.580133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.580307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.580332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 
00:34:35.166 [2024-07-15 20:40:13.580505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.580530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.580736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.580764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.580953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.580979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.581143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.581168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.581335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.581361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.581525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.581550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.581733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.581759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.581929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.581955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.582104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.582129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.582275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.582301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 
00:34:35.166 [2024-07-15 20:40:13.582499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.582525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.582718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.582744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.582893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.582920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.583113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.583138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.583336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.583362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.583554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.583580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.583730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.583756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.583920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.583946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.584126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.584152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.584325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.584351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 
00:34:35.166 [2024-07-15 20:40:13.584520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.166 [2024-07-15 20:40:13.584545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.166 qpair failed and we were unable to recover it. 00:34:35.166 [2024-07-15 20:40:13.584684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.584710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.584915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.584950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.585094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.585120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.585299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.585325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.585493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.585519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.585694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.585720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.585893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.585930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.586099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.586124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.586265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.586291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 
00:34:35.167 [2024-07-15 20:40:13.586493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.586519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.586734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.586762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.586931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.586958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.587135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.587161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.587358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.587384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.587536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.587565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.587704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.587729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.587905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.587932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.588100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.588126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.588296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.588321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 
00:34:35.167 [2024-07-15 20:40:13.588470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.588496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.588692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.588718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.588854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.588887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.589057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.589084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.589265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.589290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.589441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.589466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.589665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.589691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.589905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.589948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.590094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.590120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.590310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.590335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 
00:34:35.167 [2024-07-15 20:40:13.590533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.590559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.590728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.590753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.590903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.590930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.591123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.591156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.591333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.591359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.591556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.591582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.591756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.591782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.591979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.592005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.592204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.592231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.592438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.592464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 
00:34:35.167 [2024-07-15 20:40:13.592617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.592642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.592846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.592874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.593082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.593112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.593467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.593530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.593746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.593775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.593965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.593991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.594129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.594155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.594304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.594331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.594505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.594531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.594699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.594724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 
00:34:35.167 [2024-07-15 20:40:13.594903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.594930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.595131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.595156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.595356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.595381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.595529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.595554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.595741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.595766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.595918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.595948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.596127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.596153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.596328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.167 [2024-07-15 20:40:13.596354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.167 qpair failed and we were unable to recover it. 00:34:35.167 [2024-07-15 20:40:13.596554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.596580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.596728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.596753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 
00:34:35.168 [2024-07-15 20:40:13.596920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.596946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.597095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.597120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.597322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.597348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.597522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.597547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.597700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.597726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.597929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.597956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.598124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.598150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.598334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.598360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.598578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.598720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.598746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 
00:34:35.168 [2024-07-15 20:40:13.598900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.598934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.599133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.599158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.599325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.599351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.599520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.599546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.599747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.599773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.599972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.599998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.600135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.600160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.600308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.600333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.600531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.600556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.600771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.600800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 
00:34:35.168 [2024-07-15 20:40:13.600977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.601004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.601214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.601239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.601419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.601445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.601643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.601669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.601856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.601891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.602113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.602145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.602328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.602354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.602525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.602550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.602720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.602745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.602887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.602913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 
00:34:35.168 [2024-07-15 20:40:13.603089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.603115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.603289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.603314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.603481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.603506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.603677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.603702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.603887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.603914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.604087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.604116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.604314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.604339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.604489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.604516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.604680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.604706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.604884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.604910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 
00:34:35.168 [2024-07-15 20:40:13.605110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.605135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.605312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.605339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.605539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.605565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.605943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.605969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.606165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.606190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.606354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.606380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.606578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.606603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.606773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.606799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.606945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.168 [2024-07-15 20:40:13.606971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.168 qpair failed and we were unable to recover it. 00:34:35.168 [2024-07-15 20:40:13.607150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.607176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 
00:34:35.169 [2024-07-15 20:40:13.607349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.607375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.607522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.607547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.607746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.607772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.607941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.607968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.608117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.608143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.608287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.608312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.608487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.608512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.608684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.608709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.608867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.608905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.609069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.609095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 
00:34:35.169 [2024-07-15 20:40:13.609252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.609277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.609425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.609451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.609657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.609683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.609849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.609875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.610067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.610092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.610289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.610315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.610483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.610509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.610664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.610689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.610853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.610886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.611072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.611098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 
00:34:35.169 [2024-07-15 20:40:13.611299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.611325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.611473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.611498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.611669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.611695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.611933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.611959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.612140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.612166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.612364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.612393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.612569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.612594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.612768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.612794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.612970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.612997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.613173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.613199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 
00:34:35.169 [2024-07-15 20:40:13.613350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.613375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.613518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.613544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.613755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.613781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.613927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.613953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.614106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.614132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.614304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.614330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.614530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.614556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.614726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.614751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.614896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.614922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.615112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.615137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 
00:34:35.169 [2024-07-15 20:40:13.615341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.615367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.615500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.615526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.615720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.615745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.615920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.615947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.616124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.616150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.616321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.616346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.616510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.616535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.616706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.616731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.616946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.616973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.617119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.617145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 
00:34:35.169 [2024-07-15 20:40:13.617284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.617310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.617486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.617511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.617660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.617686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.617862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.617906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.618083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.618108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.618278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.618304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.169 [2024-07-15 20:40:13.618473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.169 [2024-07-15 20:40:13.618499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.169 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.618641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.618668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.618844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.618869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.619048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.619073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 
00:34:35.170 [2024-07-15 20:40:13.619246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.619272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.619447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.619473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.619645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.619670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.619809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.619834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.620047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.620074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.620243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.620273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.620457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.620483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.620655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.620681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.620841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.620869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.621070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.621096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 
00:34:35.170 [2024-07-15 20:40:13.621246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.621271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.621443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.621469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.621607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.621633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.621779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.621805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.622009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.622035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.622173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.622200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.622404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.622430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.622598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.622625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.622773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.622798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 00:34:35.170 [2024-07-15 20:40:13.622953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.170 [2024-07-15 20:40:13.622979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.170 qpair failed and we were unable to recover it. 
00:34:35.170 [2024-07-15 20:40:13.623158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.170 [2024-07-15 20:40:13.623184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420
00:34:35.170 qpair failed and we were unable to recover it.
00:34:35.170 - 00:34:35.455 [2024-07-15 20:40:13.623359 - 20:40:13.666150] the same three-message sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats a further 209 times for the same tqpair, address, and port.
00:34:35.455 [2024-07-15 20:40:13.666356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.666386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-15 20:40:13.666579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.666609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-15 20:40:13.666831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.666858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-15 20:40:13.667072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.667098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-15 20:40:13.667271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.667296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-15 20:40:13.667509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.667548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-15 20:40:13.667741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.667770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-15 20:40:13.667961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-15 20:40:13.667988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.668135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.668178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.668375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.668416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 
00:34:35.456 [2024-07-15 20:40:13.668609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.668640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.668844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.668874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.669079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.669111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.669333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.669361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.669576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.669612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.669849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.669901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.670070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.670095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.670283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.670321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.670538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.670568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.670751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.670778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 
00:34:35.456 [2024-07-15 20:40:13.670985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.671019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.671222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.671252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.671439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.671468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.671775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.671826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.672037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.672073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.672238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.672264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.672485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.672516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.672703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.672734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.672954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.672981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.673143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.673191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 
00:34:35.456 [2024-07-15 20:40:13.673412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.673449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.673716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.673771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.674021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.674049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.674313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.674343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.674507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.674544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.674751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.674778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.674976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.675002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.675173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.675271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.675469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.675509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 00:34:35.456 [2024-07-15 20:40:13.675682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.456 [2024-07-15 20:40:13.675708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.456 qpair failed and we were unable to recover it. 
00:34:35.456 [2024-07-15 20:40:13.675903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.675933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.676157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.676192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.676413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.676443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.676611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.676638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.676810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.676835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.677071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.677101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.677313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.677343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.677526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.677553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.677746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.677774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.677983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.678013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-15 20:40:13.678198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.678228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.678422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.678447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.678604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.678634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.678828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.678858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.679083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.679112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.679333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.679364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.679540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.679570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.679790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.679820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.680032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.680059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.680227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.680253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-15 20:40:13.680529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.680579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.680738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.680768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.680977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.681003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.681163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.681190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.681373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.681409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.681563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.681589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.681761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.681794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.681972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.681999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.682153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.682181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.682356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.682388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-15 20:40:13.682602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-15 20:40:13.682628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-15 20:40:13.682807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.682833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.683026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.683053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.683227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.683253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.683451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.683476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.683647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.683681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.683866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.683900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.684071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.684097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.684272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.684297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.684492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.684517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-15 20:40:13.684683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.684707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.684901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.684927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.685133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.685163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.685311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.685338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.685640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.685696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.685921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.685948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.686101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.686126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.686352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.686382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.686654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.686705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.686899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.686935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-15 20:40:13.687144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.687171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.687449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.687501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.687848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.687925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.688154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.688185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.688376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.688404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.688699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-15 20:40:13.688751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-15 20:40:13.688957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.688985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.689156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.689194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.689421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.689467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.689678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.689710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-15 20:40:13.689888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.689930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.690103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.690129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.690294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.690324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.690602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.690652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.690870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.690914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.691114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.691145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.691336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.691365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.691724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.691773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.691990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.692016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.692218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.692248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-15 20:40:13.692434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.692464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.692677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.692706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.692893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.692937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.693154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.693184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.693400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.693428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.693611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.693638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.693809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.693835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.694046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.694076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.694252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.694292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.694611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.694662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-15 20:40:13.694853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.694886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.695080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.695109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.695335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.695371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-15 20:40:13.695617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-15 20:40:13.695644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.695848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.695874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.696063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.696100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.696313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.696353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.696705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.696755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.696962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.696989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.697160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.697190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-15 20:40:13.697434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.697474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.697748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.697805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.698012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.698038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.698248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.698278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.698490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.698519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.698701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.698728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.698901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.698946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.699122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.699152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.699359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.699387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.699726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.699793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-15 20:40:13.700021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.700051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.700270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.700300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.700556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.700585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.700784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.700811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.700985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.701011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.701211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.701241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.701436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.701475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.701673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.701700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.701858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.701897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.702106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.702135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-15 20:40:13.702355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.702386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-15 20:40:13.702694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-15 20:40:13.702752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.702992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.703019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.703218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.703244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.703389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.703415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.703602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.703629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.703778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.703820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.704004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.704030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.704228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.704260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.704443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.704470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 
00:34:35.461 [2024-07-15 20:40:13.704621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.704646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.704825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.704852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.705044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.705084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.705242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.705269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.705434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.705460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.705634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.705661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.705848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.705881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.706058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.706084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.706299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.706324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.706498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.706524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 
00:34:35.461 [2024-07-15 20:40:13.706655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.706680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.706856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.706898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.707059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.707085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.707262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.707289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.707457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.707482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.707661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.707688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.707869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.707903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.708056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.708082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.708253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.708278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-15 20:40:13.708481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-15 20:40:13.708507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-15 20:40:13.708713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.708740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.708887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.708928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.709133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.709159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.709337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.709363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.709540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.709566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.709727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.709754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.709914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.709941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.710092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.710118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.710272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.710302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.710449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.710475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-15 20:40:13.710628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.710654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.710821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.710847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.711008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.711035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.711214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.711240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.711414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.711441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.711624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.711651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.711854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.711921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.712120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.712146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.712346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.712371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.712547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.712585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-15 20:40:13.712763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.712796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.712957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.712994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.713190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.713220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.713368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.713394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.713603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.713630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.713799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.713825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.714019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-15 20:40:13.714046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-15 20:40:13.714222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.714248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.714419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.714444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.714615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.714642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 
00:34:35.463 [2024-07-15 20:40:13.714842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.714872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.715081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.715107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.715294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.715323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.715514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.715541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.715687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.715713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.715864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.715898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.716090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.716116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.716282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.716309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.716483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.716510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.716687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.716713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 
00:34:35.463 [2024-07-15 20:40:13.716891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.716937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.717084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.717111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.717281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.717307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.717477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.717503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.717714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.717741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.717913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.717939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.718111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.718137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.718347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.718373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.718513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.718538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.718727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.718755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 
00:34:35.463 [2024-07-15 20:40:13.718918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.718944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.719160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.719187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.719338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.719365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.719565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.719591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.719755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-15 20:40:13.719780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-15 20:40:13.719987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.720014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.720192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.720218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.720360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.720386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.720531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.720558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.720786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.720816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-15 20:40:13.720993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.721029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.721213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.721239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.721388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.721419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.721595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.721621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.721782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.721808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.721955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.721981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.722185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.722212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.722386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.722412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.722630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.722656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.722827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.722856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-15 20:40:13.723054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.723081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.723253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.723279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.723463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.723489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.723664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.723700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.723870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.723921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-15 20:40:13.724084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-15 20:40:13.724111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.724312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.724338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.724485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.724513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.724691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.724718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.724864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.724898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-15 20:40:13.725077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.725104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.725271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.725298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.725441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.725467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.725637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.725663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.725813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.725839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.725990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.726017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.726187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.726214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.726413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.726440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.726629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.726658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.726890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.726921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-15 20:40:13.727149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.727176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.727362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.727388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.727586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.727613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.727780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.727806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.727975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.728001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.728179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.728206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.728407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.728433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.728589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.728616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.728789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.728816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.729012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.729041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-15 20:40:13.729202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.729229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.729408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.729436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.729642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.729672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.729840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.729866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.730044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.730070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.730253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.730286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.730464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.730491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.730664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.730690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.730901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.730932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.731086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.731111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-15 20:40:13.731260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.731285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.731465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.731497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-15 20:40:13.731671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-15 20:40:13.731698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.731907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.731933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.732081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.732107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.732316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.732342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.732534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.732561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.732708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.732734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.732918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.732946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.733125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.733151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-15 20:40:13.733326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.733352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.733500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.733527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.733724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.733756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.733924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.733951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.734152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.734177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.734376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.734405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.734594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.734620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.734792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.734818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.734966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.734992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.735136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.735162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-15 20:40:13.735339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.735366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.735544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.735571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.735773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.735798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.735950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.735977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.736153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.736179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.736403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.736435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.736604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.736630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.736802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.736828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.737001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.737028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.737182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.737208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-15 20:40:13.737408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.737442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.737631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.737657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.737851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.737891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.738075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.738101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.738266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.738291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.738439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.738466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.738650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.738686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.738871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.738916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.739074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.739099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.739244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.739269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-15 20:40:13.739413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.739448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.739660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-15 20:40:13.739687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-15 20:40:13.739824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.739848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.740053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.740079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.740235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.740261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.740463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.740489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.740678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.740705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.740890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.740928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.741108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.741135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.741309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.741336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 
00:34:35.467 [2024-07-15 20:40:13.741502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.741527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.741709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.741736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.741922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.741950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.742148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.742174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.742323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.742354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.742534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.742561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.742736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.742762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.742934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.742961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.743139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.743164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.743348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.743375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 
00:34:35.467 [2024-07-15 20:40:13.743559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.743586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.743733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.743759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.743965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.744002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.744157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.744184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.744328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.744364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.744541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.744567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.744743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.744779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.744948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.744975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.745175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.745201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.745386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.745412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 
00:34:35.467 [2024-07-15 20:40:13.745587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.745614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.745793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.745829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.745980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.746011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.746188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.746214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.746387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.746415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.746551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.746578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.746754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.746779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.746927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.746954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.747159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.747186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.467 qpair failed and we were unable to recover it. 00:34:35.467 [2024-07-15 20:40:13.747361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.467 [2024-07-15 20:40:13.747388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-15 20:40:13.747533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.747558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.747759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.747785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.747994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.748022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.748176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.748203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.748339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.748365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.748549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.748580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.748784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.748814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.748972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.748998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.749175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.749200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.749421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.749448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-15 20:40:13.749597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.749623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.749769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.749795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.749998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.750026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.750208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.750241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.750430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.750455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.750633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.750659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.750838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.750865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.751072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.751099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.751296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.751322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.751500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.751527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-15 20:40:13.751666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.751690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.751903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.751936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.752092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.752124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.752303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.752329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.752477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.752504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.752671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.752697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.752857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.752895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.753066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.753092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.753238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.753264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.753447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.753473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-15 20:40:13.753641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.753668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.753807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.753835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.754014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.754045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.754225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.754250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.754425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.754452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.754619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.754645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.754822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.754848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.755043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.755074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-15 20:40:13.755270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-15 20:40:13.755296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.755475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.755503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 
00:34:35.469 [2024-07-15 20:40:13.755641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.755675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.755858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.755892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.756045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.756072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.756217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.756244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.756413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.756438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.756651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.756677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.756842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.756868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.757052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.757080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.757277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.757303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.757503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.757535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 
00:34:35.469 [2024-07-15 20:40:13.757779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.757808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.758009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.758039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.758186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.758212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.758417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.758444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.758611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.758638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.758853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.758898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.759106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.759136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.759505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.759562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.759784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.759814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.760066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.760097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 
00:34:35.469 [2024-07-15 20:40:13.760310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.760340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.760700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.760758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.760964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.760992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.761137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.761164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.761337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.761363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.761531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.761558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.761760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.761786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.761986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.762013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.762215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.762242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-15 20:40:13.762421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-15 20:40:13.762448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 
00:34:35.470 [2024-07-15 20:40:13.762644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.762671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.762824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.762851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.763033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.763064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.763234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.763260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.763463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.763500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.763653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.763679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.763889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.763934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.764082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.764110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.764284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.764310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.764471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.764500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 
00:34:35.470 [2024-07-15 20:40:13.764703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.764729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.764872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.764905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.765109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.765143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.765349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.765374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.765523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.765550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.765752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.765778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.765937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.765964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.766141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.766169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.766302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.766328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.766477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.766503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 
00:34:35.470 [2024-07-15 20:40:13.766678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.766703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.766882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.766919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.767097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.767123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.767301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.767327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.767497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.767523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.767682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.767709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.767843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.767868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.768054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.768080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.768288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.768313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.768519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.768545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 
00:34:35.470 [2024-07-15 20:40:13.768687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.768713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.768865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.768899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.769082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.769109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.769258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.769284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.769469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.769506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.769684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.769710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.769908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.769935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.770108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.770135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-15 20:40:13.770305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-15 20:40:13.770330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-15 20:40:13.770505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-15 20:40:13.770530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.476 [2024-07-15 20:40:13.811110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.476 [2024-07-15 20:40:13.811136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.476 qpair failed and we were unable to recover it. 00:34:35.476 [2024-07-15 20:40:13.811287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.476 [2024-07-15 20:40:13.811314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.476 qpair failed and we were unable to recover it. 00:34:35.476 [2024-07-15 20:40:13.811498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.476 [2024-07-15 20:40:13.811524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.476 qpair failed and we were unable to recover it. 00:34:35.476 [2024-07-15 20:40:13.811678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.476 [2024-07-15 20:40:13.811704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.476 qpair failed and we were unable to recover it. 00:34:35.476 [2024-07-15 20:40:13.811852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.476 [2024-07-15 20:40:13.811885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.476 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.812067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.812092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.812262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.812287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.812491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.812517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.812699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.812724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.812898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.812931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 
00:34:35.477 [2024-07-15 20:40:13.813117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.813143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.813296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.813323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.813527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.813552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.813721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.813749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.813917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.813943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.814163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.814189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.814370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.814395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.814571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.814597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.814743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.814769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.814971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.815004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 
00:34:35.477 [2024-07-15 20:40:13.815143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.815168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.815367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.815392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.815562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.815587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.815763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.815789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.815939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.815965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.816133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.816159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.816294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.816320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.816495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.816522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.816696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.816722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.816861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.816898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 
00:34:35.477 [2024-07-15 20:40:13.817070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.817095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.817246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.817271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.817427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.817452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.817625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.817652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.817820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.817845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.818001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.818027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.818206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.818232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.818386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.818411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.818582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.818608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.818788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.818813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 
00:34:35.477 [2024-07-15 20:40:13.818981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.819007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.819141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.819166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.819335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.819361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.819570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.819596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.819741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.477 [2024-07-15 20:40:13.819766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.477 qpair failed and we were unable to recover it. 00:34:35.477 [2024-07-15 20:40:13.819944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.819970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.820143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.820168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.820309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.820335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.820503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.820529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.820667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.820693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 
00:34:35.478 [2024-07-15 20:40:13.820892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.820918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.821097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.821122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.821318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.821344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.821522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.821547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.821712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.821737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.821937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.821963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.822140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.822167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.822364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.822390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.822553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.822578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.822742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.822768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 
00:34:35.478 [2024-07-15 20:40:13.822963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.822989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.823169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.823194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.823386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.823411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.823588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.823614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.823809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.823834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.823979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.824005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.824176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.824202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.824376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.824402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.824601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.824627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.824789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.824822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 
00:34:35.478 [2024-07-15 20:40:13.825012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.825038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.825217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.825243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.825380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.825406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.825578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.825603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.478 [2024-07-15 20:40:13.825752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.478 [2024-07-15 20:40:13.825780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.478 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.825977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.826003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.826179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.826205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.826375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.826400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.826565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.826591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.826789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.826815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 
00:34:35.479 [2024-07-15 20:40:13.826983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.827012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.827165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.827191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.827331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.827356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.827502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.827528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.827705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.827730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.827934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.827960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.828113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.828138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.828311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.828337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.828508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.828533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.828705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.828731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 
00:34:35.479 [2024-07-15 20:40:13.828883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.828909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.829110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.829136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.829302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.829327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.829481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.829506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.829682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.829707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.829857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.829889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.830045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.830070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.830246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.830271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.830442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.830467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.830636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.830661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 
00:34:35.479 [2024-07-15 20:40:13.830835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.830860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.831059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.831086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.831256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.831282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.831486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.831511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.831859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.831926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.832124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.832151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.832356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.832382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.832579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.832604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.832781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.832807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.832956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.832985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 
00:34:35.479 [2024-07-15 20:40:13.833160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.833185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.479 [2024-07-15 20:40:13.833388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.479 [2024-07-15 20:40:13.833413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.479 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.833554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.833580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.833754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.833780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.833938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.833964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.834117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.834143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.834310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.834336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.834511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.834536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.834734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.834759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.834961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.834987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 
00:34:35.480 [2024-07-15 20:40:13.835287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.835344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.835579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.835607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.835824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.835852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.836086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.836112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.836306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.836331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.836538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.836563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.836706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.836731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.836942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.836968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.837120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.837146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.837326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.837351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 
00:34:35.480 [2024-07-15 20:40:13.837500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.837527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.837702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.837728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.837892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.837945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.838148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.838173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.838372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.838398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.838601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.838627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.838801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.838826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.838997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.839024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.839195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.839221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.839398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.839424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 
00:34:35.480 [2024-07-15 20:40:13.839620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.839646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.839815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.839840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.840027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.840054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.840223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.840249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.840447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.840472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.840652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.840677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.840845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.840870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.841055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.841081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.841278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.480 [2024-07-15 20:40:13.841304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.480 qpair failed and we were unable to recover it. 00:34:35.480 [2024-07-15 20:40:13.841453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.841482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 
00:34:35.481 [2024-07-15 20:40:13.841658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.841684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.841851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.841886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.842100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.842128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.842345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.842370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.842570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.842598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.842788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.842817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.842980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.843006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.843205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.843230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.843367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.843393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.843589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.843657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 
00:34:35.481 [2024-07-15 20:40:13.843902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.843946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.844097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.844123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.844323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.844348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.844527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.844553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.844689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.844714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.844865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.844899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.845098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.845123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.845288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.845314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.845487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.845512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.845685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.845710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 
00:34:35.481 [2024-07-15 20:40:13.845847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.845873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.846066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.846092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.846273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.846299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.846479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.846504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.846656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.846681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.846852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.846897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.847080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.847105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.847265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.847291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.847454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.847480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.847683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.847708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 
00:34:35.481 [2024-07-15 20:40:13.847853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.847887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.848086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.848112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.848262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.848287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.848493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.848519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.848692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.848717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.848888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.481 [2024-07-15 20:40:13.848915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.481 qpair failed and we were unable to recover it. 00:34:35.481 [2024-07-15 20:40:13.849068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.849093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.849260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.849285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.849484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.849509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.849680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.849709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 
00:34:35.482 [2024-07-15 20:40:13.849888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.849914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.850067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.850093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.850237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.850262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.850439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.850464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.850631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.850657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.850829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.850855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.851037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.851064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.851265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.851290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.851453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.851479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.851734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.851787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 
00:34:35.482 [2024-07-15 20:40:13.851958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.851985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.852183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.852208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.852358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.852383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.852563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.852589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.852784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.852809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.852987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.853013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.853209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.853234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.853411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.853437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.853618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.853644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.853805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.853830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 
00:34:35.482 [2024-07-15 20:40:13.853980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.854005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.854181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.854207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.854380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.854405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.854569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.854594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.854759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.854784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.854986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.855012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.855163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.855189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.855378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.855404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.855572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.855598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.855767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.855792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 
00:34:35.482 [2024-07-15 20:40:13.855947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.855974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.856114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.856139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.856308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.856333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.856469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.856495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.856632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.856659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.482 [2024-07-15 20:40:13.856808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.482 [2024-07-15 20:40:13.856834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.482 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.857014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.857040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.857213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.857239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.857385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.857411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.857607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.857637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 
00:34:35.483 [2024-07-15 20:40:13.857832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.857862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.858043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.858069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.858241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.858267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.858442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.858468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.858664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.858689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.858868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.858911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.859088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.859114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.859280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.859306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.859506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.859531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.859701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.859727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 
00:34:35.483 [2024-07-15 20:40:13.859899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.859926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.860062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.860088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.860258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.860283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.860500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.860526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.860721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.860747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.860917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.860944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.861084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.861110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.861282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.861308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.861508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.861533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.861704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.861729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 
00:34:35.483 [2024-07-15 20:40:13.861927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.861953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.862125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.862151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.862321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.862346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.862517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.483 [2024-07-15 20:40:13.862543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.483 qpair failed and we were unable to recover it. 00:34:35.483 [2024-07-15 20:40:13.862748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.862773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.862973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.862999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.863176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.863202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.863352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.863378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.863580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.863606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.863754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.863780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 
00:34:35.484 [2024-07-15 20:40:13.863950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.863976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.864153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.864179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.864354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.864379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.864554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.864580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.864756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.864781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.864965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.864991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.865129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.865154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.865317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.865343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.865512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.865538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.865699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.865728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 
00:34:35.484 [2024-07-15 20:40:13.865873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.865904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.866111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.866136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.866279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.866304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.866489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.866514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.866683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.866709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.866902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.866928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.867068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.867093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.867294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.867319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.867491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.867517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.867693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.867719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 
00:34:35.484 [2024-07-15 20:40:13.867927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.867953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.868123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.868149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.868294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.868320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.868526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.868552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.868697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.868724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.868926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.868952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.869091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.869116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.869315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.869341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.869505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.869531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.869728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.869754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 
00:34:35.484 [2024-07-15 20:40:13.869907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.869933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.870138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.870163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.870333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.870358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.870528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.870554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.484 qpair failed and we were unable to recover it. 00:34:35.484 [2024-07-15 20:40:13.870754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.484 [2024-07-15 20:40:13.870779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.870993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.871020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.871167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.871193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.871386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.871411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.871555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.871580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.871745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.871773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 
00:34:35.485 [2024-07-15 20:40:13.871975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.872004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.872245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.872273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.872475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.872500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.872646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.872672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.872837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.872863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.873015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.873040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.873241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.873267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.873469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.873495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.873667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.873692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 00:34:35.485 [2024-07-15 20:40:13.873870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.485 [2024-07-15 20:40:13.873906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.485 qpair failed and we were unable to recover it. 
00:34:35.490 [2024-07-15 20:40:13.918312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.490 [2024-07-15 20:40:13.918342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.490 qpair failed and we were unable to recover it. 00:34:35.490 [2024-07-15 20:40:13.918563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.490 [2024-07-15 20:40:13.918593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.490 qpair failed and we were unable to recover it. 00:34:35.490 [2024-07-15 20:40:13.918812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.490 [2024-07-15 20:40:13.918853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.490 qpair failed and we were unable to recover it. 00:34:35.490 [2024-07-15 20:40:13.919044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.490 [2024-07-15 20:40:13.919071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.490 qpair failed and we were unable to recover it. 00:34:35.490 [2024-07-15 20:40:13.919243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.490 [2024-07-15 20:40:13.919269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.490 qpair failed and we were unable to recover it. 00:34:35.490 [2024-07-15 20:40:13.919450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.490 [2024-07-15 20:40:13.919481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.919711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.919739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.919906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.919933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.920080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.920116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.920321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.920349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 
00:34:35.491 [2024-07-15 20:40:13.920562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.920591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.920770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.920796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.920965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.920994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.921190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.921220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.921402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.921432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.921600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.921626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.921861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.921897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.922117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.922158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.922357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.922387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.922546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.922571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 
00:34:35.491 [2024-07-15 20:40:13.922752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.922786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.922938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.922964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.923178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.923208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.923402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.923427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.923592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.923622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.923850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.923883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.924077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.924109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.924300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.924325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.924491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.924519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.924713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.924742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 
00:34:35.491 [2024-07-15 20:40:13.924959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.924988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.925156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.925183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.925374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.925403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.925628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.925654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.925798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.925824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.926019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.926045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.926216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.926245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.926442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.926470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.926624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.926652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.926835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.926860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 
00:34:35.491 [2024-07-15 20:40:13.927065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.927094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.927279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.927308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.491 qpair failed and we were unable to recover it. 00:34:35.491 [2024-07-15 20:40:13.927461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.491 [2024-07-15 20:40:13.927489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.927679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.927704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.927864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.927906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.928099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.928128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.928344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.928368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.928538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.928563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.928727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.928752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.928972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.929001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 
00:34:35.492 [2024-07-15 20:40:13.929191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.929219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.929412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.929438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.929637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.929665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.929866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.929902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.930093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.930122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.930311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.930337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.930556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.930584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.930774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.930802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.931021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.931049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.931246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.931272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 
00:34:35.492 [2024-07-15 20:40:13.931440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.931468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.931666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.931695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.931912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.931942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.932130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.932155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.932371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.932400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.932569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.932598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.932812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.932844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.933027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.933053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.933246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.933274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.933493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.933521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 
00:34:35.492 [2024-07-15 20:40:13.933680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.933708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.933945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-07-15 20:40:13.933971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.492 qpair failed and we were unable to recover it. 00:34:35.492 [2024-07-15 20:40:13.934186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.934214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.934406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.934434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.934612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.934641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.934809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.934835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.935017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.935043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.935222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.935251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.935436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.935465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.935654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.935679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 
00:34:35.493 [2024-07-15 20:40:13.935870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.935913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.936102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.936131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.936330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.936358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.936579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.936604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.936764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.936793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.936985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.937015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.937201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.937229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.937430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.937456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.937625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.937652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.937811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.937840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 
00:34:35.493 [2024-07-15 20:40:13.938044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.938070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.938244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.938269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.938459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.938488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.938712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.938741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.938912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.938940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.939133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.939159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.939377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.939406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.939631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.939660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.939871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.939907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.940095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.940120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 
00:34:35.493 [2024-07-15 20:40:13.940292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.940320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.940510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.940539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.940753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.940781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.940973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.940999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.941153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.941179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.941329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.941355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.941538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.941571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.941756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.941782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.941937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.941966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.942155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.942184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 
00:34:35.493 [2024-07-15 20:40:13.942394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.493 [2024-07-15 20:40:13.942422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.493 qpair failed and we were unable to recover it. 00:34:35.493 [2024-07-15 20:40:13.942593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.942618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.942803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.942828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.943020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.943050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.943233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.943261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.943451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.943478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.943703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.943731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.943919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.943948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.944132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.944160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.944333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.944359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 
00:34:35.494 [2024-07-15 20:40:13.944550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.944578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.944792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.944821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.944983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.945009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.945180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.945206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.945552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.945610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.945826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.945854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.946079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.946104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.946299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.946324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.946471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.946496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 00:34:35.494 [2024-07-15 20:40:13.946667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.946710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it. 
00:34:35.494 [2024-07-15 20:40:13.946898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.494 [2024-07-15 20:40:13.946939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.494 qpair failed and we were unable to recover it.
00:34:35.781 [... the three messages above repeat for every subsequent connection attempt from 2024-07-15 20:40:13.947131 through 20:40:13.992636 (log prefix advancing from 00:34:35.494 to 00:34:35.781); each retry fails with connect() errno = 111 (ECONNREFUSED) against 10.0.0.2 port 4420 for tqpair=0x7f3d64000b90, and every qpair is reported as failed and unrecoverable ...]
00:34:35.781 [2024-07-15 20:40:13.992833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.992861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.993035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.993063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.993255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.993283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.993476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.993501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.993722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.993750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.993934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.993963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.994124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.994157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.994357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.994382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.994529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.994555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.994744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.994773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-15 20:40:13.994937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.994966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.995134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.995161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.995350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.995378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.995566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.995595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.995756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.995784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.995960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.995986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.996175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.996200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.996395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.996424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.996575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.996604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-15 20:40:13.996860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-15 20:40:13.996896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-15 20:40:13.997121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.997163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.997320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.997348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.997562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.997590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.997815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.997840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.998070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.998099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.998337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.998365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.998594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.998619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.998773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.998799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.998994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.999023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.999243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.999272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 
00:34:35.782 [2024-07-15 20:40:13.999485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.999513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.999734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:13.999759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:13.999982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.000011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.000172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.000201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.000397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.000425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.000626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.000652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.000845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.000873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.001078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.001107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.001293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.001321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.001508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.001533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 
00:34:35.782 [2024-07-15 20:40:14.001694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.001722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.001913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.001942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.002125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.002154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.002337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.002363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.002517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.002546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.002762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.002790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.003004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.003037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.003218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.003244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.003425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.003451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.003687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.003712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 
00:34:35.782 [2024-07-15 20:40:14.003901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.003927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.004072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.004099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.004318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.004347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.004542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.004571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.004755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.004784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.004959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.004985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.005133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.005175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.005367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.005397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-15 20:40:14.005596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-15 20:40:14.005625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.005820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.005847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-15 20:40:14.006059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.006089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.006321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.006347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.006546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.006575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.006795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.006822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.006995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.007026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.007216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.007246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.007477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.007503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.007680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.007707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.007914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.007942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.008475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.008506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-15 20:40:14.008741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.008767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.008941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.008968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.009121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.009147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.009331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.009360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.009555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.009583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.009773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.009799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.009952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.009979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.010130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.010156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.010359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.010388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.010609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.010635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-15 20:40:14.010830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.010858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.011058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.011087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.011247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.011275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.011468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.011494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.011717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.011746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.011951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.011978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.012130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.012156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.012373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.012398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.012590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.012620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.012813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.012842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-15 20:40:14.013038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.013064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.013213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.013239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.013416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.013442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.013610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.013636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.013803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.013829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.013981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.014007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.014175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.014204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.014392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.014421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.014638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-15 20:40:14.014666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-15 20:40:14.014860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.014892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-15 20:40:14.015069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.015099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.015296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.015324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.015512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.015541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.015733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.015758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.015928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.015958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.016145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.016174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.016334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.016363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.016557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.016584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.016746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.016775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.016979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.017008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-15 20:40:14.017174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.017202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.017393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.017419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.017610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.017639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.017869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.017910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.018101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.018130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.018325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.018351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.018518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.018547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.018738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.018768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.018965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.018992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.019159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.019185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-15 20:40:14.019338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.019366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.019557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.019586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.019804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.019833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.020003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.020029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.020220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.020249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.020441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.020469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.020659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.020689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.020853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.020885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.021060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.021086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.021270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.021296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-15 20:40:14.021470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.021497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.021669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.021696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.021852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.021888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.022052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.022081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.022277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.022303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.022475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.022501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.022697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.022726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.022893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-15 20:40:14.022923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-15 20:40:14.023085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.785 [2024-07-15 20:40:14.023113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.785 qpair failed and we were unable to recover it. 00:34:35.785 [2024-07-15 20:40:14.023307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.785 [2024-07-15 20:40:14.023332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.785 qpair failed and we were unable to recover it. 
00:34:35.790 [2024-07-15 20:40:14.064579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.064605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.064797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.064822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.064978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.065004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.065180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.065206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.065374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.065400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.065574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.065600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.065772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.065797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.065978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.066005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.066174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.066200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.066346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.066371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 
00:34:35.790 [2024-07-15 20:40:14.066520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.066546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.066716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.066745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.066935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.066961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.067110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.067141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.067283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.067309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.067453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.067479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.790 qpair failed and we were unable to recover it. 00:34:35.790 [2024-07-15 20:40:14.067613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.790 [2024-07-15 20:40:14.067639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.067809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.067834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.068017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.068043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.068216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.068242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 
00:34:35.791 [2024-07-15 20:40:14.068440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.068466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.068633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.068659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.068866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.068899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.069067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.069093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.069267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.069292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.069489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.069515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.069695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.069720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.069899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.069926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.070097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.070123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.070264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.070290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 
00:34:35.791 [2024-07-15 20:40:14.070498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.070524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.070693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.070719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.070920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.070946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.071089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.071116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.071311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.071336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.071506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.071532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.071706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.071732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.071873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.071905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.072055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.072080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.072258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.072284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 
00:34:35.791 [2024-07-15 20:40:14.072454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.072480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.072654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.072680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.072857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.072900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.073075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.073100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.073297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.073323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.073505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.073530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.073692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.073717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.073884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.073910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.074049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.074075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.074244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.074270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 
00:34:35.791 [2024-07-15 20:40:14.074441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.074466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.074633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.074659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.074806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.074832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.075013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.075044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.075243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.075269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.075442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-15 20:40:14.075469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:35.791 [2024-07-15 20:40:14.075608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.075634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.075842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.075868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.076042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.076068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.076245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.076271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 
00:34:35.792 [2024-07-15 20:40:14.076444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.076471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.076666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.076691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.076832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.076859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.077020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.077047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.077217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.077244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.077437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.077462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.077627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.077653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.077803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.077829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.078011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.078037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.078214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.078240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 
00:34:35.792 [2024-07-15 20:40:14.078414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.078439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.078579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.078605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.078795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.078823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.079009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.079035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.079224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.079252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.079585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.079640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.079894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.079938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.080111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.080137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.080310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.080336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.080511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.080536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 
00:34:35.792 [2024-07-15 20:40:14.080709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.080735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.080901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.080928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.081095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.081121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.081265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.081291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.081440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.081466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.081673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.081699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.081848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.081873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.082051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.082076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.082270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.082295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.082493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.082522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 
00:34:35.792 [2024-07-15 20:40:14.082708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.082736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.082930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.082957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.083101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.083127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.083272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.083302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.083450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.083475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.083631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.792 [2024-07-15 20:40:14.083657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.792 qpair failed and we were unable to recover it. 00:34:35.792 [2024-07-15 20:40:14.083804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.083829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.084007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.084033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.084184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.084209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.084417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.084443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 
00:34:35.793 [2024-07-15 20:40:14.084593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.084619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.084754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.084781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.084952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.084978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.085160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.085186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.085351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.085376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.085524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.085549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.085740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.085766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.085915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.085941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.086105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.086131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.086303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.086329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 
00:34:35.793 [2024-07-15 20:40:14.086503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.086530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.086701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.086727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.086866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.086898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.087075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.087101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.087269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.087295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.087468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.087495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.087692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.087717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.087851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.087883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.088024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.088049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.088200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.088226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 
00:34:35.793 [2024-07-15 20:40:14.088425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.088450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.088621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.088646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.088795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.088820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.089022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.089050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.089246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.089272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.089440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.089466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.089636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.089662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.089856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.089893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.090088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.090113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.090314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.090339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 
00:34:35.793 [2024-07-15 20:40:14.090482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.090507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.090660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.090685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.090886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.090913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.091051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.091081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.091253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.091279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.091449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.793 [2024-07-15 20:40:14.091475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.793 qpair failed and we were unable to recover it. 00:34:35.793 [2024-07-15 20:40:14.091644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.091670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.091835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.091860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.092040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.092065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.092237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.092263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 
00:34:35.794 [2024-07-15 20:40:14.092410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.092436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.092570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.092595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.092763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.092788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.092967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.092993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.093139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.093164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.093306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.093331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.093498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.093523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.093727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.093753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.093952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.093978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.094120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.094145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 
00:34:35.794 [2024-07-15 20:40:14.094342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.094367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.094543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.094569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.094739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.094764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.094929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.094955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.095121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.095147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.095325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.095351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.095522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.095547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.095689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.095714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.095890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.095916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.096094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.096120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 
00:34:35.794 [2024-07-15 20:40:14.096323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.096349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.096548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.096573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.096739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.096765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.096976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.097004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.097175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.097201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.097372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.097398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.097593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.097619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.097829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.097855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.098015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.098042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.098212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.098238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 
00:34:35.794 [2024-07-15 20:40:14.098408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.098433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.098632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.098658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.098800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.098827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.099028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.099061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.099258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.099283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.099456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.099482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.099624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.794 [2024-07-15 20:40:14.099651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.794 qpair failed and we were unable to recover it. 00:34:35.794 [2024-07-15 20:40:14.099844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.099872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.100073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.100099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.100274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.100300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 
00:34:35.795 [2024-07-15 20:40:14.100448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.100474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.100627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.100652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.100846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.100882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.101071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.101097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.101272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.101298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.101491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.101519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.101736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.101764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.101989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.102016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.102162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.102188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.102358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.102383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 
00:34:35.795 [2024-07-15 20:40:14.102580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.102605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.102806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.102832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.103006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.103032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.103202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.103227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.103369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.103394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.103572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.103597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.103763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.103788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.103981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.104008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.104213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.104238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.104415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.104442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 
00:34:35.795 [2024-07-15 20:40:14.104595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.104621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.104786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.104813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.105010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.105036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.105210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.105236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.105427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.105453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.105650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.105675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.105842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.105870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.106086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.106112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.106288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.106314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.106469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.106494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 
00:34:35.795 [2024-07-15 20:40:14.106689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.795 [2024-07-15 20:40:14.106715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.795 qpair failed and we were unable to recover it. 00:34:35.795 [2024-07-15 20:40:14.106947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.106974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.107147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.107173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.107351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.107381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.107517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.107542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.107717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.107743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.107940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.107966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.108147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.108172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.108343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.108368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.108569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.108595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 
00:34:35.796 [2024-07-15 20:40:14.108769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.108794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.108968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.108995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.109168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.109193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.109366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.109392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.109557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.109582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.109724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.109751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.109939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.109965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.110114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.110140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.110345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.110371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.110571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.110596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 
00:34:35.796 [2024-07-15 20:40:14.110759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.110785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.110949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.110975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.111118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.111144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.111288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.111313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.111485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.111510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.111682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.111708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.111848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.111874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.112034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.112059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.112257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.112282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.112459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.112484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 
00:34:35.796 [2024-07-15 20:40:14.112660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.112685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.112872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.112922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.113075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.113101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.113250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.113277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.113477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.113503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.113671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.113697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.113870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.113900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.114054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.114080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.114272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.114297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.114468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.114495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 
00:34:35.796 [2024-07-15 20:40:14.114665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.114691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.796 qpair failed and we were unable to recover it. 00:34:35.796 [2024-07-15 20:40:14.114866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.796 [2024-07-15 20:40:14.114905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.115080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.115105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.115276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.115301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.115455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.115482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.115633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.115659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.115855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.115889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.116064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.116089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.116262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.116288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.116436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.116462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 
00:34:35.797 [2024-07-15 20:40:14.116637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.116662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.116828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.116853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.117029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.117054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.117225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.117251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.117423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.117448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.117618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.117645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.117822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.117848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.118056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.118082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.118234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.118259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.118402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.118428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 
00:34:35.797 [2024-07-15 20:40:14.118636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.118661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.118798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.118824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.118998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.119024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.119173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.119199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.119372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.119398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.119550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.119575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.119760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.119788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.119953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.119980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.120155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.120182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.120386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.120412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 
00:34:35.797 [2024-07-15 20:40:14.120582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.120612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.120790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.120815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.120987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.121013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.121191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.121218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.121417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.121443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.121587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.121613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.121792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.121818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.121984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.122011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.122182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.122207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.122369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.122394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 
00:34:35.797 [2024-07-15 20:40:14.122534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.122560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.122725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.797 [2024-07-15 20:40:14.122751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.797 qpair failed and we were unable to recover it. 00:34:35.797 [2024-07-15 20:40:14.122928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.122955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 00:34:35.798 [2024-07-15 20:40:14.123110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.123136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 00:34:35.798 [2024-07-15 20:40:14.123288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.123314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 00:34:35.798 [2024-07-15 20:40:14.123515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.123541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 00:34:35.798 [2024-07-15 20:40:14.123715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.123740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 00:34:35.798 [2024-07-15 20:40:14.123903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.123928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 00:34:35.798 [2024-07-15 20:40:14.124096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.124122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 00:34:35.798 [2024-07-15 20:40:14.124273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.798 [2024-07-15 20:40:14.124299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.798 qpair failed and we were unable to recover it. 
00:34:35.798 [2024-07-15 20:40:14.124467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.798 [2024-07-15 20:40:14.124492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420
00:34:35.798 qpair failed and we were unable to recover it.
00:34:35.798 [... the same connect() failed (errno = 111) / sock connection error sequence for tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 repeats continuously through the timestamps below ...]
00:34:35.803 [2024-07-15 20:40:14.166931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.803 [2024-07-15 20:40:14.166957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420
00:34:35.803 qpair failed and we were unable to recover it.
00:34:35.803 [2024-07-15 20:40:14.167155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.167182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.167384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.167417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.167630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.167656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.167855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.167887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.168085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.168110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.168256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.168283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.168474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.168510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.168681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.168706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.168905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.168932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.169134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.169164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 
00:34:35.803 [2024-07-15 20:40:14.169308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.169339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.169488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.169514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.169686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.169716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.169888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.169916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.170114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.803 [2024-07-15 20:40:14.170140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.803 qpair failed and we were unable to recover it. 00:34:35.803 [2024-07-15 20:40:14.170345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.170372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.170589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.170615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.170762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.170787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.170962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.170990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.171136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.171163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 
00:34:35.804 [2024-07-15 20:40:14.171361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.171387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.171530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.171556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.171732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.171759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.171915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.171941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.172115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.172142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.172344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.172370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.172540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.172565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.172762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.172787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.172970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.172997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.173150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.173177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 
00:34:35.804 [2024-07-15 20:40:14.173351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.173376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.173526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.173553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.173721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.173747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.173922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.173948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.174124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.174153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.174331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.174356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.174554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.174581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.174757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.174783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.174928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.174955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.175133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.175159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 
00:34:35.804 [2024-07-15 20:40:14.175331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.175356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.175504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.175531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.175705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.175731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.175908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.175935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.176110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.176137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.176318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.176345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.176541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.176567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.176736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.176762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.176916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.176943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.177124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.177151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 
00:34:35.804 [2024-07-15 20:40:14.177324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.177354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.177505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.177532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.177679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.177704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.177905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.177932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.178072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.178098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.804 qpair failed and we were unable to recover it. 00:34:35.804 [2024-07-15 20:40:14.178250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.804 [2024-07-15 20:40:14.178282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.178471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.178498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.178703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.178730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.178883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.178909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.179050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.179076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 
00:34:35.805 [2024-07-15 20:40:14.179223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.179248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.179418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.179444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.179614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.179641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.179841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.179868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.180034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.180060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.180229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.180256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.180428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.180455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.180648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.180673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.180844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.180869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.181025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.181050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 
00:34:35.805 [2024-07-15 20:40:14.181225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.181251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.181427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.181454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.181629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.181655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.181856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.181890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.182087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.182114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.182282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.182308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.182456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.182482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.182668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.182696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.182846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.182873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.183066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.183102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 
00:34:35.805 [2024-07-15 20:40:14.183249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.183274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.183418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.183443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.183642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.183667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.183846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.183873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.184024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.184049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.184230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.184256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.184400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.184428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.184623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.184649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.184818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.184843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.185023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.185050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 
00:34:35.805 [2024-07-15 20:40:14.185246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.185276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.185416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.185442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.185610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.185635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.185854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.185895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.186063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.186089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.186286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.805 [2024-07-15 20:40:14.186311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.805 qpair failed and we were unable to recover it. 00:34:35.805 [2024-07-15 20:40:14.186483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.186508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.186683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.186710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.186910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.186937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.187108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.187133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 
00:34:35.806 [2024-07-15 20:40:14.187297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.187322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.187497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.187524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.187696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.187721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.187888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.187915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.188094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.188120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.188264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.188290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.188465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.188500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.188655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.188681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.188853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.188885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.189061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.189087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 
00:34:35.806 [2024-07-15 20:40:14.189261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.189286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.189475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.189501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.189676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.189701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.189901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.189928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.190078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.190103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.190245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.190271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.190464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.190489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.190669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.190695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.190836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.190861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.191047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.191074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 
00:34:35.806 [2024-07-15 20:40:14.191243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.191268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.191470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.191497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.191704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.191730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.191903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.191929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.192096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.192121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.192307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.192333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.192509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.192535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.192715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.192741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.192923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.192958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 00:34:35.806 [2024-07-15 20:40:14.193128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.806 [2024-07-15 20:40:14.193155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.806 qpair failed and we were unable to recover it. 
00:34:35.806 [2024-07-15 20:40:14.193303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.806 [2024-07-15 20:40:14.193333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420
00:34:35.806 qpair failed and we were unable to recover it.
00:34:35.806 [the same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt from 20:40:14.193504 through 20:40:14.236145]
00:34:35.812 [2024-07-15 20:40:14.236292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.236317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.236488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.236513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.236712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.236737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.236888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.236914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.237115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.237141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.237281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.237307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.237449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.237474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.237621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.237648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.237823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.237848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.238057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.238083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 
00:34:35.812 [2024-07-15 20:40:14.238282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.238307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.238477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.238504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.238675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.238700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.238882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.238908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.239081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.239107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.239249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.239275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.239474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.239499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.239677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.239706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.239849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.239874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.240094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.240119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 
00:34:35.812 [2024-07-15 20:40:14.240263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.240290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.240462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.240487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.240711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.240739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.240931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.240958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.241110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.241136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.241286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.241312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.241506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.241532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.241701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.241726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.241896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.241922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.242123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.242149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 
00:34:35.812 [2024-07-15 20:40:14.242296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.242321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.242492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.242518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.242667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.242692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.242867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.242900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.243097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.243123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.812 [2024-07-15 20:40:14.243291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.812 [2024-07-15 20:40:14.243317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.812 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.243489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.243514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.243691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.243717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.243855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.243898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.244041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.244068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 
00:34:35.813 [2024-07-15 20:40:14.244209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.244235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.244410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.244436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.244611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.244636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.244786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.244811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.244988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.245014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.245182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.245208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.245356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.245382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.245576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.245602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.245795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.245822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.246021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.246048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 
00:34:35.813 [2024-07-15 20:40:14.246243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.246269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.246416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.246442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.246589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.246615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.246784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.246809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.246975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.247001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.247175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.247200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.247389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.247414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.247585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.247614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.247803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.247831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.248019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.248044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 
00:34:35.813 [2024-07-15 20:40:14.248217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.248242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.248445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.248471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.248666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.248691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.248828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.248854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.249012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.249038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.249240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.249268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.249488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.249516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.249728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.249757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.249932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.249959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.250135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.250161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 
00:34:35.813 [2024-07-15 20:40:14.250309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.250335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.250535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.250560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.250703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.250728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.250910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.250936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.251105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.251131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.251281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.251308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.251480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.251505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.251705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.251731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.251902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.251928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 00:34:35.813 [2024-07-15 20:40:14.252104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.813 [2024-07-15 20:40:14.252129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.813 qpair failed and we were unable to recover it. 
00:34:35.814 [2024-07-15 20:40:14.252300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.252326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.252499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.252525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.252727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.252753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.252905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.252931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.253127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.253152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.253326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.253351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.253492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.253517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.253689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.253714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.253887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.253913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.254056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.254082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 
00:34:35.814 [2024-07-15 20:40:14.254223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.254249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.254446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.254472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.254636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.254662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.254863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.254896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.255068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.255093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.255258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.255283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.255433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.255459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.255641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.255671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.255841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.255867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.256051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.256077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 
00:34:35.814 [2024-07-15 20:40:14.256215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.256240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.256407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.256432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.256633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.256659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.256833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.256858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.257005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.257031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.257231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.257257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.257401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.257426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.257624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.257649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.257828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.257853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.258010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.258037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 
00:34:35.814 [2024-07-15 20:40:14.258215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.258241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.258440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.258466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.258606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.258632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.258807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.258833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.259014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.259040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.259191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.259217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.259382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.259407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.259585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.259611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.259819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.259845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.260033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.260058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 
00:34:35.814 [2024-07-15 20:40:14.260225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.260250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.260535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.260591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.260802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.814 [2024-07-15 20:40:14.260830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.814 qpair failed and we were unable to recover it. 00:34:35.814 [2024-07-15 20:40:14.261059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.815 [2024-07-15 20:40:14.261102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.815 qpair failed and we were unable to recover it. 00:34:35.815 [2024-07-15 20:40:14.261347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.815 [2024-07-15 20:40:14.261376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.815 qpair failed and we were unable to recover it. 00:34:35.815 [2024-07-15 20:40:14.261595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.815 [2024-07-15 20:40:14.261623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.815 qpair failed and we were unable to recover it. 00:34:35.815 [2024-07-15 20:40:14.261813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.815 [2024-07-15 20:40:14.261841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.815 qpair failed and we were unable to recover it. 00:34:35.815 [2024-07-15 20:40:14.262088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.815 [2024-07-15 20:40:14.262116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.815 qpair failed and we were unable to recover it. 00:34:35.815 [2024-07-15 20:40:14.262323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.815 [2024-07-15 20:40:14.262351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.815 qpair failed and we were unable to recover it. 00:34:35.815 [2024-07-15 20:40:14.262635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.815 [2024-07-15 20:40:14.262663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:35.815 qpair failed and we were unable to recover it. 
00:34:35.815 [2024-07-15 20:40:14.262875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.815 [2024-07-15 20:40:14.262924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420
00:34:35.815 qpair failed and we were unable to recover it.
00:34:35.815 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f3d64000b90, addr=10.0.0.2, port=4420 repeats continuously through 2024-07-15 20:40:14.270682 ...]
00:34:35.816 [... the connect()/qpair-failure loop for tqpair=0x7f3d64000b90 continues through 2024-07-15 20:40:14.271921 ...]
00:34:35.816 [2024-07-15 20:40:14.272111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.816 [2024-07-15 20:40:14.272151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:35.816 qpair failed and we were unable to recover it.
00:34:36.097 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x1af5600, addr=10.0.0.2, port=4420 repeats continuously through 2024-07-15 20:40:14.284510 ...]
00:34:36.097 [... the connect()/qpair-failure loop for tqpair=0x1af5600 continues through 2024-07-15 20:40:14.284854 ...]
00:34:36.097 [2024-07-15 20:40:14.285015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b035b0 is same with the state(5) to be set
00:34:36.097 [2024-07-15 20:40:14.285200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.097 [2024-07-15 20:40:14.285239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420
00:34:36.097 qpair failed and we were unable to recover it.
00:34:36.097 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f3d54000b90, addr=10.0.0.2, port=4420 repeats continuously through 2024-07-15 20:40:14.289824, after which the loop resumes for tqpair=0x1af5600 starting at 2024-07-15 20:40:14.290043 ...]
00:34:36.099 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x1af5600, addr=10.0.0.2, port=4420 repeats continuously through 2024-07-15 20:40:14.304781 ...]
00:34:36.099 [2024-07-15 20:40:14.304945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.304975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.305172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.305197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.305391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.305416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.305576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.305601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.305748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.305773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.305942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.305967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.306137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.306162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.306335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.306360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.306559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.306584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.306780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.306805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 
00:34:36.099 [2024-07-15 20:40:14.306958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.306984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.307161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.307186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.307351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.307375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.307581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.307606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.307756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.307782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.307959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.307984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.099 [2024-07-15 20:40:14.308237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.099 [2024-07-15 20:40:14.308262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.099 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.308432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.308457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.308606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.308632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.308772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.308798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 
00:34:36.100 [2024-07-15 20:40:14.309003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.309029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.309275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.309300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.309477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.309502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.309639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.309664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.309867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.309902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.310053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.310078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.310271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.310296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.310468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.310496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.310673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.310698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.310845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.310870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 
00:34:36.100 [2024-07-15 20:40:14.311042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.311067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.311246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.311271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.311418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.311442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.311607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.311632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.311813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.311839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.312016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.312041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.312188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.312213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.312404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.312430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.312592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.312617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.312790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.312815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 
00:34:36.100 [2024-07-15 20:40:14.312990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.313016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.313269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.313294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.313463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.313488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.313753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.313805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.313976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.314002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.314168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.314193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.314396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.314622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.314647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.314789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.314829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.315028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.315054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 
00:34:36.100 [2024-07-15 20:40:14.315247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.315274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.315464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.315492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.315737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.315765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.315982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.316008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.316181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.316212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.316411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.316436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.316588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.316613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.316786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-15 20:40:14.316812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-15 20:40:14.316983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.317009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.317205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.317230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-15 20:40:14.317367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.317392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.317564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.317589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.317759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.317783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.317956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.317983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.318163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.318188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.318337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.318361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.318558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.318583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.318775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.318804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.319001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.319026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.319177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.319201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-15 20:40:14.319374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.319400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.319577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.319602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.319791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.319819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.319991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.320016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.320192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.320217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.320353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.320377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.320550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.320575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.320791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.320819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.321045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.321070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.321222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.321247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-15 20:40:14.321425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.321450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.321595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.321620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.321793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.321819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.322022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.322053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.322292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.322320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.322613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.322642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.322850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.322884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.323107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.323135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.323375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.323403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-15 20:40:14.323595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-15 20:40:14.323623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-15 20:40:14.323832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.323859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.324081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.324110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.324350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.324378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.324568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.324595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.324831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.324859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.325163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.325192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.325401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.325429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.325782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.325831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.326120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.326148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.326387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.326414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-15 20:40:14.326696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.326725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.326927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.326956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.327124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.327149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.327302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.327328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.327530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.327555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.327757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.327782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.327954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.327980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.328135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.328160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.328411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.328435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.328640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.328665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-15 20:40:14.328836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.328860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.329022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.329047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.329189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.329214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.329355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.329380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.329557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.329582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.329771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.329799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.330011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.330037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.330205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.330230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.330481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.330506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.330699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.330724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-15 20:40:14.330864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.330894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.331045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.331070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.331223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.331252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.331423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.331449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.331620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.331646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.331820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.331845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.332024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.332050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.332225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.332251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.332424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.332449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-15 20:40:14.332615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-15 20:40:14.332639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-15 20:40:14.332779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.102 [2024-07-15 20:40:14.332804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:36.102 qpair failed and we were unable to recover it.
00:34:36.102-00:34:36.107 [... the same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 20:40:14.333008 through 20:40:14.368563 ...]
00:34:36.107 [2024-07-15 20:40:14.368765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 [2024-07-15 20:40:14.368790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 00:34:36.107 [2024-07-15 20:40:14.368961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 [2024-07-15 20:40:14.368988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 00:34:36.107 [2024-07-15 20:40:14.369160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 [2024-07-15 20:40:14.369186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 00:34:36.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 16035 Killed "${NVMF_APP[@]}" "$@" 00:34:36.107 [2024-07-15 20:40:14.369359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 [2024-07-15 20:40:14.369385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 00:34:36.107 [2024-07-15 20:40:14.369560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 [2024-07-15 20:40:14.369585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 00:34:36.107 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:36.107 [2024-07-15 20:40:14.369748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 [2024-07-15 20:40:14.369774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 00:34:36.107 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:36.107 [2024-07-15 20:40:14.369950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 [2024-07-15 20:40:14.369976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 00:34:36.107 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:36.107 [2024-07-15 20:40:14.370121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.107 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:36.107 [2024-07-15 20:40:14.370147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.107 qpair failed and we were unable to recover it. 
00:34:36.107 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=16592
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 16592
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 16592 ']'
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:36.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:36.108 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:36.108 [2024-07-15 20:40:14.375003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.108 [2024-07-15 20:40:14.375029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:36.108 qpair failed and we were unable to recover it.
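After the restart, waitforlisten 16592 blocks until the new nvmf_tgt (pid 16592) answers on /var/tmp/spdk.sock, with max_retries=100 as traced above. Below is a minimal sketch of that kind of wait loop, assuming the stock scripts/rpc.py client is used to probe the socket; it illustrates the pattern only and is not the actual autotest_common.sh helper.

```bash
# Sketch of a waitforlisten-style helper: poll the target's RPC UNIX socket
# until it answers, then return. Assumes scripts/rpc.py from an SPDK checkout;
# the retry count mirrors the max_retries=100 seen in the trace above.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local retries=100

    while ((retries-- > 0)); do
        # give up early if the target process already died
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds only once the app is listening on the socket
        if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
```

Something like `waitforlisten_sketch 16592 /var/tmp/spdk.sock` would return as soon as the restarted target is ready to accept the nvmf_* configuration RPCs.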
00:34:36.112 [2024-07-15 20:40:14.410115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.112 [2024-07-15 20:40:14.410140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:36.112 qpair failed and we were unable to recover it.
00:34:36.112 [2024-07-15 20:40:14.410312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.410337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.410517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.410542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.410717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.410742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.410989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.411014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.411181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.411206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.411362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.411387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.411585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.411610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.411792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.411817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.411972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.411998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.412169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.412193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 
00:34:36.112 [2024-07-15 20:40:14.412335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.412360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.412525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.412551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.412720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.412745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.412895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.412921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.413071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.413096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.413291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-15 20:40:14.413316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-15 20:40:14.413513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.413539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.413696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.413722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.413907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.413932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.414111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.414136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 
00:34:36.113 [2024-07-15 20:40:14.414341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.414366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.414532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.414558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.414726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.414751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.414950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.414976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.415150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.415175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.415357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.415382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.415565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.415590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.415760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.415785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.415984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.416010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.416153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.416178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 
00:34:36.113 [2024-07-15 20:40:14.416425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.416450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.416624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.416649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.416824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.416849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.417030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.417055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.417228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.417258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.417433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.417459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.417608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.417633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.417775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.417811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.417992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.418018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.418186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.418211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 
00:34:36.113 [2024-07-15 20:40:14.418409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.418435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.418612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.418637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.418785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.418810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.419022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.419048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.419197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.419222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.419358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.419384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.419634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.419663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.419843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.419868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.420017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.420042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.420186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.420212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 
00:34:36.113 [2024-07-15 20:40:14.420405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.420430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.420627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.420652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.420799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.420825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.421020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.421047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.421221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.421246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.421404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.421429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.421617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.421642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.113 qpair failed and we were unable to recover it. 00:34:36.113 [2024-07-15 20:40:14.421786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.113 [2024-07-15 20:40:14.421812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.422021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.422046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.422246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.422271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 
00:34:36.114 [2024-07-15 20:40:14.422441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.422467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.422621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.422648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.422786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.422812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.422925] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:36.114 [2024-07-15 20:40:14.423015] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.114 [2024-07-15 20:40:14.423018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.423044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.423293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.423318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.423563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.423588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.423784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.423809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.423968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.423993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.424162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.424188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 
00:34:36.114 [2024-07-15 20:40:14.424358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.424382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.424590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.424616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.424795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.424821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.424988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.425021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.425218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.425244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.425444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.425469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.425639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.425665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.425832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.425858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.426072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.426098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.426271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.426296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 
00:34:36.114 [2024-07-15 20:40:14.426447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.426472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.426644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.426669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.426917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.426943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.427108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.427133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.427310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.427342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.427556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.427581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.427756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.427781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.427940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.427967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.428104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.428140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.428311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.428336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 
00:34:36.114 [2024-07-15 20:40:14.428513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.428539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.428673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.428698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.428902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.114 [2024-07-15 20:40:14.428928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.114 qpair failed and we were unable to recover it. 00:34:36.114 [2024-07-15 20:40:14.429110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.429135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.429336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.429362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.429505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.429530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.429682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.429707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.429884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.429910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.430057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.430082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.430228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.430254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-15 20:40:14.430455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.430484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.430631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.430656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.430825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.430851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.431013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.431038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.431183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.431209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.431358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.431392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.431641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.431666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.431808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.431833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.431980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.432005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.432175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.432201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-15 20:40:14.432370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.432395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.432644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.432669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.432843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.432868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.433019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.433045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.433201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.433226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.433388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.433413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.433581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.433607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.433783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.433808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.433988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.434015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.434190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.434215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-15 20:40:14.434386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.434412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.434573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.434599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.434751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.434776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.434976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.435002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.435153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.435179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.435342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.435367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.435564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.435613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.435771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.435805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.435978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.436005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.436177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.436205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-15 20:40:14.436405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.436432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.436618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.436644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.436843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.436870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.437051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-15 20:40:14.437077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-15 20:40:14.437253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.437278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.437428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.437454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.437649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.437680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.437843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.437868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.438053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.438078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.438248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.438274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-15 20:40:14.438468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.438498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.438645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.438670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.438848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.438873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.439022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.439047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.439195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.439220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.439395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.439421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.439587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.439612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.439790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.439815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.439961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.439994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.440188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.440213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-15 20:40:14.440381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.440406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.440577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.440603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.440774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.440803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.440982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.441007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.441153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.441183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.441357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.441390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.441555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.441580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.441721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.441746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.441892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.441918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.442063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.442088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-15 20:40:14.442264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.442289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.442455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.442480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.442631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.442655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.442860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.442891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.443041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.443068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.443267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.443292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.443444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.443469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.443673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.443698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.443874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.443906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.444088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.444115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-15 20:40:14.444264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.444289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.444428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.444454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.444623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.444649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.444898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.444924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.445072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.445097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-15 20:40:14.445297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-15 20:40:14.445323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.445465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.445490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.445657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.445682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.445901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.445928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.446094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.446119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 
00:34:36.117 [2024-07-15 20:40:14.446328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.446353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.446523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.446549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.446727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.446753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.446925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.446951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.447098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.447124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.447282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.447308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.447483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.447509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.447684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.447709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.447860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.447897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.448057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.448098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 
00:34:36.117 [2024-07-15 20:40:14.448322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.448350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.448525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.448552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.448724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.448750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.448909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.448936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.449113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.449139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.449300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.449327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.449499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.449525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.449727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.449753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.449918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.449944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.450096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.450121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 
00:34:36.117 [2024-07-15 20:40:14.450312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.450337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.450514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.450539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.450680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.450706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.450860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.450895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.451062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.451087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.451259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.451285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.451457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.451483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.451624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.451650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.451800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.451825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.452007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.452033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 
00:34:36.117 [2024-07-15 20:40:14.452179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.452204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.452349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.452375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.452519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.452544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.452724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.452749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.452890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.452916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.453088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-15 20:40:14.453113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-15 20:40:14.453311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.453336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.453488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.453513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.453681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.453706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.453863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.453915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 
00:34:36.118 [2024-07-15 20:40:14.454074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.454102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.454299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.454325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d5c000b90 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.454502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.454529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.454708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.454734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.454886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.454912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.455084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.455109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.455290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.455315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.455460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.455485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.455652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.455677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.455853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.455883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 
00:34:36.118 [2024-07-15 20:40:14.456076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.456101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.456261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.456286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.456482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.456508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.456643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.456668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.456839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.456864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.457042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.457068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.457252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.457277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.457451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.457477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.457621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.457647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.457843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.457868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 
00:34:36.118 [2024-07-15 20:40:14.458057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.458083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.458268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.458293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.458465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.458489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.458672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.458697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.458868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.458899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.459066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.459091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.459242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.459267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.118 [2024-07-15 20:40:14.459466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.459492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.459634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.459659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.459862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.459898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 
00:34:36.118 [2024-07-15 20:40:14.460072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.118 [2024-07-15 20:40:14.460098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.118 qpair failed and we were unable to recover it. 00:34:36.118 [2024-07-15 20:40:14.460242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.460266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.460438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.460462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.460642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.460667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.460838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.460866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.461018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.461043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.461216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.461242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.461419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.461444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.461637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.461662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.461838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.461870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-15 20:40:14.462057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.462082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.462228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.462254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.462396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.462421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.462561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.462586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.462757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.462783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.462929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.462955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.463119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.463145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.463291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.463316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.463465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.463491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.463685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.463711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-15 20:40:14.463852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.463882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.464035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.464060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.464243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.464268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.464402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.464427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.464599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.464624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.464787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.464812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.464976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.465005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.465183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.465208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.465372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.465398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.465569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.465594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-15 20:40:14.465765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.465790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.465969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.465995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.466159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.466185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.466355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.466381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.466520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.466545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.466693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.466719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.466895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.466921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.467071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.467095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.467292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.467316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.467484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.467510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-15 20:40:14.467657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.467682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.467844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-15 20:40:14.467869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-15 20:40:14.468052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.468078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-15 20:40:14.468274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.468299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-15 20:40:14.468494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.468519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-15 20:40:14.468688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.468714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-15 20:40:14.468861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.468891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-15 20:40:14.469041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.469067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-15 20:40:14.469275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.469300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-15 20:40:14.469441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-15 20:40:14.469466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 
00:34:36.120 [2024-07-15 20:40:14.469606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.120 [2024-07-15 20:40:14.469631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420
00:34:36.120 qpair failed and we were unable to recover it.
[... this three-line posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error (tqpair=0x1af5600, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." pattern repeats continuously, with only the microsecond timestamps changing, from 20:40:14.469606 through 20:40:14.496399 ...]
[... the same connect() failed (errno = 111) / sock connection error / qpair-failure triplet keeps repeating from 20:40:14.496553 onward; one unrelated application message is interleaved with it ...]
00:34:36.124 [2024-07-15 20:40:14.496753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same triplet for tqpair=0x1af5600 (addr=10.0.0.2, port=4420) continues repeating around the NOTICE above through 20:40:14.509806 ...]
00:34:36.125 [2024-07-15 20:40:14.509979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.510005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.510177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.510202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.510380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.510404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.510604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.510629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.510780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.510805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.511009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.511035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.511200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.511224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.511378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.511404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.511571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.511596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.511746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.511770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 
00:34:36.125 [2024-07-15 20:40:14.511958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.125 [2024-07-15 20:40:14.511990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.125 qpair failed and we were unable to recover it. 00:34:36.125 [2024-07-15 20:40:14.512186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.512212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.512352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.512377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.512551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.512577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.512749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.512774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.512952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.512978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.513156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.513181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.513356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.513380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.513562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.513587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.513758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.513784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 
00:34:36.126 [2024-07-15 20:40:14.513959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.513984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.514165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.514190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.514342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.514368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.514534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.514559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.514732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.514758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.514925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.514951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.515119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.515144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.515324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.515349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.515492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.515517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.515712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.515737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 
00:34:36.126 [2024-07-15 20:40:14.515888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.515913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.516089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.516114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.516255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.516281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.516455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.516480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.516650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.516675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.516845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.516871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.517034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.517059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.517233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.517258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.517431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.517456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.517625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.517649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 
00:34:36.126 [2024-07-15 20:40:14.517846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.517871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.518024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.518048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.518190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.518215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.518364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.518390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.518564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.518590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.518761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.518787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.518956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.518983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.519156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.519182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.519378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.519403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.519578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.519604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 
00:34:36.126 [2024-07-15 20:40:14.519773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.519799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.520001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.126 [2024-07-15 20:40:14.520027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.126 qpair failed and we were unable to recover it. 00:34:36.126 [2024-07-15 20:40:14.520196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.520222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.520361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.520386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.520538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.520564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.520736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.520763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.520939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.520965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.521134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.521160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.521333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.521359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.521534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.521561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 
00:34:36.127 [2024-07-15 20:40:14.521755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.521781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.521949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.521976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.522177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.522203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.522400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.522427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.522600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.522631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.522810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.522837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.523014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.523041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.523240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.523266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.523415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.523442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.523639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.523665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 
00:34:36.127 [2024-07-15 20:40:14.523801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.523827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.523998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.524025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.524198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.524224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.524372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.524399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.524596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.524622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.524770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.524796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.524952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.524979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.525131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.525157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.525341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.525367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.525512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.525538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 
00:34:36.127 [2024-07-15 20:40:14.525705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.525732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.525901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.525928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.526062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.526089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.526231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.526257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.526452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.526478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.526675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.526701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.526874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.526907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.127 qpair failed and we were unable to recover it. 00:34:36.127 [2024-07-15 20:40:14.527081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.127 [2024-07-15 20:40:14.527107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.527299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.527325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.527495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.527521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 
00:34:36.128 [2024-07-15 20:40:14.527688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.527715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.527905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.527936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.528109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.528135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.528272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.528298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.528464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.528491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.528684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.528710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.528851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.528885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.529038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.529067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.529244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.529271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.529448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.529474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 
00:34:36.128 [2024-07-15 20:40:14.529651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.529677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.529844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.529870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.530045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.530073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.530216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.530244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.530386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.530413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.530593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.530619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.530812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.530838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.531037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.531063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.531235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.531261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.531435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.531461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 
00:34:36.128 [2024-07-15 20:40:14.531656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.531681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.531856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.531900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.532040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.532066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.532243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.532269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.532410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.532437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.532633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.532659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.532801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.532827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.532974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.533000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.533167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.533193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.533401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.533427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 
00:34:36.128 [2024-07-15 20:40:14.533598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.533625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.533802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.533828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.533996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.534024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.534218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.534244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.534424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.534450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.534622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.534649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.534800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.534826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.534999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.535026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.128 [2024-07-15 20:40:14.535224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.128 [2024-07-15 20:40:14.535250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.128 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.535424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.535451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 
00:34:36.129 [2024-07-15 20:40:14.535617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.535644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.535809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.535836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.536014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.536041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.536187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.536213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.536407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.536433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.536574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.536600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.536797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.536823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.536968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.536995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.537190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.537216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.537356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.537383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 
00:34:36.129 [2024-07-15 20:40:14.537579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.537606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.537749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.537776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.537945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.537973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.538123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.538149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.538311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.538337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.538505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.538531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.538747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.538774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.538940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.538966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.539132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.539159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.539354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.539380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 
00:34:36.129 [2024-07-15 20:40:14.539542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.539568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.539712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.539738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.539912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.539939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.540088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.540115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.540290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.540318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.540516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.540543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.540714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.540740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.540906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.540932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.541072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.541097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.541266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.541295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 
00:34:36.129 [2024-07-15 20:40:14.541432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.541458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.541635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.541661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.541799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.541825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.541974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.542000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.542174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.542199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.542370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.542397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.542568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.542593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.542736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.542761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.542912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.542939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 00:34:36.129 [2024-07-15 20:40:14.543075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.129 [2024-07-15 20:40:14.543101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.129 qpair failed and we were unable to recover it. 
00:34:36.129 [2024-07-15 20:40:14.543298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.543323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.543524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.543550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.543747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.543772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.543952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.543979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.544154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.544179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.544327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.544353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.544565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.544590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.544790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.544816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.544998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.545024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.545199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.545225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 
00:34:36.130 [2024-07-15 20:40:14.545419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.545445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.545611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.545637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.545802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.545827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.545994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.546021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.546174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.546200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.546377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.546402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.546553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.546582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.546765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.546791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.546993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.547019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.547180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.547206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 
00:34:36.130 [2024-07-15 20:40:14.547376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.547402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.547548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.547574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.547745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.547771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.547944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.547970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.548135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.548161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.548339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.548365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.548537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.548562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.548737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.548762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.548938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.548964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.549113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.549138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 
00:34:36.130 [2024-07-15 20:40:14.549320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.549346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.549548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.549573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.549713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.549738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.549886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.549912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.550150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.550175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.550346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.550371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.550551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.550576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.550757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.550782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.550953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.550979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.551129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.551155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 
00:34:36.130 [2024-07-15 20:40:14.551390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.551415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.130 qpair failed and we were unable to recover it. 00:34:36.130 [2024-07-15 20:40:14.551622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.130 [2024-07-15 20:40:14.551648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.551827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.551853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.552009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.552039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.552191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.552216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.552428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.552453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.552631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.552657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.552827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.552853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.553127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.553153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.553346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.553372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 
00:34:36.131 [2024-07-15 20:40:14.553546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.553575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.553749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.553774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.553976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.554003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.554180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.554206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.554374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.554400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.554575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.554601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.554809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.554836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.555017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.555044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.555248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.555274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.555445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.555471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 
00:34:36.131 [2024-07-15 20:40:14.555652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.555678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.555853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.555887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.556036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.556062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.556232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.556258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.556430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.556457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.556632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.556659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.556827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.556853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.557030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.557057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.557228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.557254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.557401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.557427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 
00:34:36.131 [2024-07-15 20:40:14.557600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.557626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.557800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.557827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.557973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.557999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.558166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.558191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.558368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.558394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.558563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.558589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.558734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.558760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.131 qpair failed and we were unable to recover it. 00:34:36.131 [2024-07-15 20:40:14.558905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.131 [2024-07-15 20:40:14.558932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.559079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.559106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.559276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.559303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 
00:34:36.132 [2024-07-15 20:40:14.559448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.559475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.559638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.559664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.559830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.559857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.560025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.560051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.560232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.560259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.560406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.560432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.560606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.560632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.560782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.560809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.560953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.560980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.561180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.561206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 
00:34:36.132 [2024-07-15 20:40:14.561382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.561408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.561584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.561610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.561760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.561787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.561957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.561984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.562133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.562160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.562341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.562369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.562538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.562565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.562708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.562735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.562915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.562943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.563084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.563110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 
00:34:36.132 [2024-07-15 20:40:14.563260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.563286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.563469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.563496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.563662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.563687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.563834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.563861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.564010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.564039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.564245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.564272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.564447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.564474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.564637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.564663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.564861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.564892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.565069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.565096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 
00:34:36.132 [2024-07-15 20:40:14.565275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.565302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.565473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.565504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.565650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.565677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.565847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.565873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.566057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.566083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.566229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.566256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.566427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.566454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.566627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.566653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.132 [2024-07-15 20:40:14.566821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.132 [2024-07-15 20:40:14.566847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.132 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.567044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.567071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 
00:34:36.133 [2024-07-15 20:40:14.567254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.567281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.567421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.567459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.567659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.567685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.567857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.567889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.568063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.568089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.568271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.568298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.568454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.568481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.568681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.568707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.568855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.568887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.569061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.569088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 
00:34:36.133 [2024-07-15 20:40:14.569286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.569312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.569460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.569486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.569659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.569687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.569857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.569888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.570071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.570098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.570299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.570326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.570500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.570526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.570726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.570752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.570892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.570924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.571067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.571094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 
00:34:36.133 [2024-07-15 20:40:14.571266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.571292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.571481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.571508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.571707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.571734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.571873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.571905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.572062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.572089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.572257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.572284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.572451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.572477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.572651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.572677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.572820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.572846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.572994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.573021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 
00:34:36.133 [2024-07-15 20:40:14.573172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.573198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.573344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.573370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.573545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.573572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.573773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.573799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.573975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.574003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.574178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.574204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.574372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.574398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.574563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.574590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.574788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.574814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 00:34:36.133 [2024-07-15 20:40:14.574984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.133 [2024-07-15 20:40:14.575010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.133 qpair failed and we were unable to recover it. 
00:34:36.133 [2024-07-15 20:40:14.575158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.575184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.575358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.575384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.575528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.575554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.575732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.575758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.575956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.575983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.576129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.576155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.576302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.576329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.576501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.576527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.576695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.576721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.576852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.576883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 
00:34:36.134 [2024-07-15 20:40:14.577058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.577095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.577266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.577293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.577434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.577460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.577637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.577662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.577815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.577842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.577996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.578023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.578163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.578189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.578358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.578384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.578559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.578585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.578786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.578812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 
00:34:36.134 [2024-07-15 20:40:14.578993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.579020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.579164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.579190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.579339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.579365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.579536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.579563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.579708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.579736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.579882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.579910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.580083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.580109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.580286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.580312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.580486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.580513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.580714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.580740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 
00:34:36.134 [2024-07-15 20:40:14.580889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.580917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.581066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.581093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.581264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.581292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.581475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.581502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.581699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.581726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.581899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.581925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.582072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.582099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.582266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.582292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.582489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.582516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.134 [2024-07-15 20:40:14.582668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.582694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 
00:34:36.134 [2024-07-15 20:40:14.582870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.134 [2024-07-15 20:40:14.582909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.134 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.583087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.583114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.583286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.583312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.583480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.583505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.583681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.583709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.583853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.583884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.584027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.584058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.584224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.584250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.584404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.584429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.584576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.584602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 
00:34:36.135 [2024-07-15 20:40:14.584768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.584793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.584965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.584991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.585140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.585165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.585360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.585386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.585578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.585603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.585800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.585825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.586004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.586030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.586203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.586228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.586429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.586455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.586619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.586645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 
00:34:36.135 [2024-07-15 20:40:14.586786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.586812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.586971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.586997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.587163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.587189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.587341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.587366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.587506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.587532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.587670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.587695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.587839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.587865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.588051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.588077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5600 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.588065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.135 [2024-07-15 20:40:14.588100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.135 [2024-07-15 20:40:14.588115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.135 [2024-07-15 20:40:14.588127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.135 [2024-07-15 20:40:14.588138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:36.135 [2024-07-15 20:40:14.588196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:36.135 [2024-07-15 20:40:14.588223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:36.135 [2024-07-15 20:40:14.588302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.588343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.588282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:36.135 [2024-07-15 20:40:14.588286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:36.135 [2024-07-15 20:40:14.588529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.588558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.588712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.588739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.588917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.588945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.592891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.592942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.593145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.593176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.593363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.593393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.593585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.593615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.593776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.593804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 
00:34:36.135 [2024-07-15 20:40:14.594006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.594036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.135 [2024-07-15 20:40:14.594189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.135 [2024-07-15 20:40:14.594218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.135 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.594399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.594427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.594592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.594622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.594803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.594832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.595017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.595046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.595220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.595258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.595441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.595470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.595654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.595683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.595944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.595974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 
00:34:36.136 [2024-07-15 20:40:14.596130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.596158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.596318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.596347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.596615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.596645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.596798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.596826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.597024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.597054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.597214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.597242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.597436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.597466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.597663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.597692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.597843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.597872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.598054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.598083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 
00:34:36.136 [2024-07-15 20:40:14.598268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.598298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.598479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.598508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.598688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.598716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.598896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.598925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.599083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.599111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.599287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.599315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.599460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.599488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.599666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.599693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.599860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.599897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.600085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.600116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 
00:34:36.136 [2024-07-15 20:40:14.600296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.600326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.600504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.600533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.600717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.600745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.600980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.601010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.601170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.601199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.601404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.601432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.603890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.136 [2024-07-15 20:40:14.603924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.136 qpair failed and we were unable to recover it. 00:34:36.136 [2024-07-15 20:40:14.604120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.137 [2024-07-15 20:40:14.604152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.137 qpair failed and we were unable to recover it. 00:34:36.137 [2024-07-15 20:40:14.604343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.137 [2024-07-15 20:40:14.604374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.137 qpair failed and we were unable to recover it. 00:34:36.137 [2024-07-15 20:40:14.604532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.137 [2024-07-15 20:40:14.604563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.137 qpair failed and we were unable to recover it. 
00:34:36.137 [2024-07-15 20:40:14.604747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.137 [2024-07-15 20:40:14.604778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.137 qpair failed and we were unable to recover it. 00:34:36.137 [2024-07-15 20:40:14.604973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.137 [2024-07-15 20:40:14.605003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.137 qpair failed and we were unable to recover it. 00:34:36.137 [2024-07-15 20:40:14.605217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.137 [2024-07-15 20:40:14.605247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.137 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 20:40:14.605401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 20:40:14.605432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 20:40:14.605607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 20:40:14.605637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 20:40:14.605795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 20:40:14.605824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 20:40:14.606000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 20:40:14.606035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 20:40:14.606199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 20:40:14.606228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 20:40:14.606488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.606517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.606699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.606728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 
00:34:36.407 [2024-07-15 20:40:14.606893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.606923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.607081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.607109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.607317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.607346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.607527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.607556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.607743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.607772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.607945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.607974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.608185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.608214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.608375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.608403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.608551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.608579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.608795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.608824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 
00:34:36.407 [2024-07-15 20:40:14.609888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.609922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.610144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.610200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.610419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.610461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.610687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.610728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.610907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.610949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.611128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.611179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.611400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.611441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.611655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.611695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.611894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.611935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.612108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.612147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 
00:34:36.407 [2024-07-15 20:40:14.612346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.612385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.612604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.612644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.612830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.612890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.613096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.613136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.613341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.613380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.613589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.613642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.613920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.613962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.614231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.614271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.614465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.614506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.614698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.614737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 
00:34:36.407 [2024-07-15 20:40:14.614957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.614998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.615236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.615276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.615475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.615516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.615690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.615731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.615937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.615977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.616159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.616199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.616420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.616464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-15 20:40:14.616669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-15 20:40:14.616706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.616893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.616934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.617143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.617189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-15 20:40:14.617396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.617437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.617657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.617695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.617941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.617983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.618217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.618267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.618446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.618487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.618744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.618784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.619029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.619071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.619267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.619306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.619519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.619559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.619777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.619817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-15 20:40:14.620039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.620081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.620268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.620308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.620542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.620582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.620760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.620799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.620993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.621035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.621215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.621263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.621472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.621509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.621712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.621751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.621930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.621969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.622170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.622216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-15 20:40:14.622427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.622466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.622672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.622721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.622911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.622950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.623143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.623195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.623358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.623387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.623594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.623622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.623790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.623817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.624077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.624106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.624283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.624311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.624444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.624471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-15 20:40:14.624628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.624656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.624797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.624824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.625001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.625028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.625169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.625197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.625366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.625392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.625544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-15 20:40:14.625571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-15 20:40:14.625746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.625779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.625965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.625993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.626145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.626181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.626354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.626380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-15 20:40:14.626529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.626557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.626774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.626801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.626980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.627008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.627149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.627177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.627326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.627353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.627527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.627564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.627851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.627893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.628053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.628080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.628223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.628249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.628420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.628457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-15 20:40:14.628636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.628663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.628835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.628861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.629023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.629051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.629218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.629245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.629432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.629460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.629653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.629683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.629837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.629863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.630031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.630058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.630209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.630236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.630407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.630434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-15 20:40:14.630588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.630615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.630762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.630788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.630934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.630961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.631138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.631176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.631328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.631365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.631529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.631557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.631769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.631796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.631986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.632013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.632215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.632241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.632497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.632523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-15 20:40:14.632689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.632715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.632891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.632919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.633090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.633117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.633338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.633365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.633535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.633562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.633732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-15 20:40:14.633759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-15 20:40:14.633901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.633933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.634079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.634108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.634257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.634284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.634499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.634526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-15 20:40:14.634690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.634722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.634950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.634978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.635122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.635149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.635298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.635325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.635504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.635531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.635715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.635742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.635888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.635916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.636122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.636148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.636438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.636465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.636639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.636666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-15 20:40:14.636844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.636885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.637056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.637082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.637260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.637287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.637461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.637489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.637653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.637680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.637834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.637863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.638060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.638087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.638243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.638270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.638439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.638466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.638635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.638662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-15 20:40:14.638832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.638858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.639039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.639065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.639203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.639230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.639415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.639445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.639628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.639655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.639829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.639856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.639998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.640025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.640178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.640205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.640488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.640515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.640679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.640706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-15 20:40:14.640857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.640892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.641045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.641072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.641223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.641249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.641424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.641451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.641619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.641647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-15 20:40:14.641827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-15 20:40:14.641853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.642073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.642101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.642289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.642317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.642471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.642498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.642696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.642723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 
00:34:36.411 [2024-07-15 20:40:14.642914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.642941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.643140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.643167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.643314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.643340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.643482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.643508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.643656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.643682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.643885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.643912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.644047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.644074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.644251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.644278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.644433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.644459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.644631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.644660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 
00:34:36.411 [2024-07-15 20:40:14.644833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.644860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.645018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.645045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.645183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.645209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.645405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.645432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.645605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.645631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.645782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.645810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.645963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.645991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.646164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.646191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.646364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.646390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 20:40:14.646577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 20:40:14.646604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 
00:34:36.411 [2024-07-15 20:40:14.646746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-15 20:40:14.646772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 20:40:14.646746 through 20:40:14.689539: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f3d54000b90 against 10.0.0.2 port 4420, and the qpair is declared failed and unrecoverable ...]
00:34:36.417 [2024-07-15 20:40:14.689512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-15 20:40:14.689539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-15 20:40:14.689694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.689720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.689888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.689916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.690111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.690139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.690311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.690338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.690477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.690503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.690673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.690699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.690835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.690862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.691035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.691063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.691249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.691277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.691443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.691469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-15 20:40:14.691623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.691650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.691800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.691827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.691973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.692000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.692161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.692190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.692336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.692363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.692508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.692535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.692713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.692740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.692946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.692973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.693121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.693148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.693299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.693326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-15 20:40:14.693494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.693521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.693715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.693742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.693905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.693933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.694099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.694125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.694298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.694325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.694468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.694494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.694667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.694694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.694846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.694890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.695050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.695077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.695270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.695301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-15 20:40:14.695443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-15 20:40:14.695469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-15 20:40:14.695639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.695665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.695841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.695869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.696061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.696092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.696234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.696261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.696419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.696445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.696591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.696619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.696818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.696845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.697014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.697042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.697214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.697240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-15 20:40:14.697382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.697410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.697568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.697596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.697807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.697835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.697998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.698026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.698190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.698216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.698390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.698418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.698585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.698611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.698786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.698813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.698958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.698986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.699156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.699194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-15 20:40:14.699366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.699393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.699571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.699598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.699767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.699793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.699949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.699976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.700144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.700171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.700355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.700386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.700533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.700559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.700719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.700746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.700916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.700943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.701081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.701107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-15 20:40:14.701268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.701295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.701444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.701470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.701635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.701662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.701829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.701856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.702039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.702066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.702208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.702240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.702389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.702423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.702569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.702595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.702791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.702817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.702967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.702994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-15 20:40:14.703189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.703216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 20:40:14.703415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 20:40:14.703442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.703629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.703659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.703839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.703869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.704057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.704084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.704257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.704284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.704432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.704459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.704635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.704661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.704806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.704834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.704986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.705013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 
00:34:36.419 [2024-07-15 20:40:14.705166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.705193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.705365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.705400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.705578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.705604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.705769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.705796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.705964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.705992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.706126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.706154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.706329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.706356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.706528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.706560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.706741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.706768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.706917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.706946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 
00:34:36.419 [2024-07-15 20:40:14.707117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.707145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.707321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.707348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.707554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.707592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.707750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.707778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.707963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.707991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.708139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.708166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.708348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.708381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.708552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.708580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.708754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.708781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.708925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.708952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 
00:34:36.419 [2024-07-15 20:40:14.709113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.709140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.709323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.709350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.709522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.709549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.709723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.709750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.709921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.709949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.710119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.710146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.710360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.710387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.710562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.710589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.710738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 20:40:14.710764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 20:40:14.710909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.710937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 20:40:14.711078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.711106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.711315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.711342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.711477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.711504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.711656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.711686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.711856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.711888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.712029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.712056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.712270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.712296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.712455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.712481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.712641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.712667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.712801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.712827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 20:40:14.713014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.713040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.713195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.713221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.713367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.713395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.713569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.713595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.713751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.713778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.713955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.713982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.714152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.714188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.714387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.714413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.714571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.714599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 20:40:14.714749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 20:40:14.714777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 20:40:14.714936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.420 [2024-07-15 20:40:14.714964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420
00:34:36.420 qpair failed and we were unable to recover it.
00:34:36.420 (the same three-record sequence -- posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it -- repeats for every subsequent reconnect attempt from 20:40:14.715 through 20:40:14.756)
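errno = 111 in the records above is ECONNREFUSED on Linux: the initiator's connect() toward 10.0.0.2:4420 is rejected because the target side is not accepting connections at that point of the disconnect test, so nvme_tcp_qpair_connect_sock gives up on the qpair. As a minimal illustrative sketch only (not part of this test run; the address and port below are placeholders), the same errno can be observed from a plain TCP connect in Python:

import errno
import socket

# Connect to a port with no listener; on Linux this raises
# ConnectionRefusedError with errno 111 (ECONNREFUSED), the same value
# reported by posix_sock_create in the records above.
try:
    with socket.create_connection(("127.0.0.1", 4420), timeout=1):
        print("connected")
except OSError as exc:
    print(exc.errno, errno.errorcode.get(exc.errno))  # typically prints: 111 ECONNREFUSED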
00:34:36.421 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:36.421 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:34:36.421 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:36.421 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:36.421 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:36.423 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:36.423 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:36.423 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.423 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
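The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 step above asks the running target, over SPDK's JSON-RPC socket, for a RAM-backed bdev named Malloc0 (with the usual rpc.py argument order, 64 MiB total size and 512-byte blocks). As a rough illustrative sketch only (the parameter names follow the commonly documented bdev_malloc_create JSON-RPC shape and the socket path is a placeholder; neither is taken from this log), the request behind that helper looks approximately like:

import json
import socket

# Approximate JSON-RPC payload for "bdev_malloc_create 64 512 -b Malloc0":
# 64 MiB of 512-byte blocks -> 64 * 1024 * 1024 / 512 = 131072 blocks.
# Field names and the socket path are assumptions for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_malloc_create",
    "params": {"name": "Malloc0", "block_size": 512, "num_blocks": 64 * 1024 * 1024 // 512},
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")  # placeholder path for the target's RPC socket
    sock.sendall((json.dumps(request) + "\n").encode())
    print(sock.recv(65536).decode())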
00:34:36.425 [2024-07-15 20:40:14.756361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 20:40:14.756393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 20:40:14.756567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 20:40:14.756594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.756770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.756796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.756950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.756978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.757152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.757179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.757334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.757360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.757534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.757560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.757836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.757863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.758059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.758086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.758279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.758305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 20:40:14.758469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.758495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.758668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.758694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.758869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.758902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.759046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.759073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.759217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.759244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.759388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.759415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.759605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.759632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.759827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.759853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.760014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.760041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.760240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.760267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 20:40:14.760407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.760434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.760576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.760602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.760773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.760800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.760988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.761016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.761186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.761212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.761384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.761411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.761548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.761575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.761770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.761813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.761982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.762010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.762159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.762198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 20:40:14.762350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.762377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.762559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.762586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.762734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.762765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.762951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.762979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.763191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.763218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.763359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.763385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.763584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.763610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.763780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.763806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.763956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.763982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.764164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.764192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 20:40:14.764363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 20:40:14.764394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 20:40:14.764534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.764560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.764708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.764734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.764885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 Malloc0 00:34:36.427 [2024-07-15 20:40:14.764912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.765056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.765083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.765233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.765261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.765404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:36.427 [2024-07-15 20:40:14.765431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.427 [2024-07-15 20:40:14.765567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.765594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.427 [2024-07-15 20:40:14.765793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.765820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.766008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.766036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.766191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.766217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d54000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.766397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.766426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.766590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.766622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.766771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.766798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.766967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.766995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.767134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.767160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.767342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.767368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 20:40:14.767529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.767555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.767695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.767722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.767884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.767911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.768061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.768087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.768257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.768283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.768482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.768507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.768606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.427 [2024-07-15 20:40:14.768655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.768681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.768838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.768875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.769058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.769089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 20:40:14.769247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.769272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.769414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.769440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.769615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.769641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.769793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.769819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.769993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.770019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.770158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.770186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.770323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.770349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.770549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.770575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.770750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.770776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.770927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.770954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 20:40:14.771113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.771139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.771343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.771369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.771513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 20:40:14.771539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 20:40:14.771715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.771741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.771885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.771912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.772052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.772078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.772255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.772282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.772418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.772444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.772589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.772616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.772779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.772806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 
00:34:36.428 [2024-07-15 20:40:14.772996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.773023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.773165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.773191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.773361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.773387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.773557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.773582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.773748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.773774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.773965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.773991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.774170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.774196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.774375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.774401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.774579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.774605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.774745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.774771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 
00:34:36.428 [2024-07-15 20:40:14.774946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.774974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.775138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.775164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.775311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.775338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.775511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.775537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.775706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.775731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.775893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.775919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.776072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.776098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.776245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.776271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.776405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.776432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.776578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.776608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 
00:34:36.428 [2024-07-15 20:40:14.776776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.776802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.428 [2024-07-15 20:40:14.776955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.776981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:36.428 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.428 [2024-07-15 20:40:14.777151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.777179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.428 [2024-07-15 20:40:14.777318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.777344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.777511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.777538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.777710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 20:40:14.777736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 20:40:14.777904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.777931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.778086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.778112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 
00:34:36.429 [2024-07-15 20:40:14.778290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.778316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.778490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.778517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.778675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.778701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.778865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.778898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.779072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.779098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.779267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.779293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.779446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.779472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.779640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.779667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.779836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.779862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.780022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.780048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 
00:34:36.429 [2024-07-15 20:40:14.780222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.780248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.780390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.780417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.780565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.780591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.780753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.780778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.780927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.780954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.781126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.781152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.781310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.781336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.781477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.781503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.781682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.781708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.781858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.781889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 
00:34:36.429 [2024-07-15 20:40:14.782033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.782058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.782226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.782252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.782438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.782464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.782632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.782658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.782820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.782846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.783007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.783033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.783176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.783203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.783374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.783401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.783535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.783561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.783708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.783738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 
00:34:36.429 [2024-07-15 20:40:14.783913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.783940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.784090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.784116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.784295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.784321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.784468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.784493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.784667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.784693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 [2024-07-15 20:40:14.784831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.784857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.429 [2024-07-15 20:40:14.785002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.785028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.429 qpair failed and we were unable to recover it. 00:34:36.429 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:36.429 [2024-07-15 20:40:14.785167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.429 [2024-07-15 20:40:14.785194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.430 qpair failed and we were unable to recover it. 
00:34:36.430 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.430 [2024-07-15 20:40:14.785363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.785389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.785541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.785568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.785770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.785796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.785954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.785981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.786141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.786176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.786321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.786351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.786490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.786515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.786705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.786731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.786866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.786898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 
00:34:36.430 [2024-07-15 20:40:14.787045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.787071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.787251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.787277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.787413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.787439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.787594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.787620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.787767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.787793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.787974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.788002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.788169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.788195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.788347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.788377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.788576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.788601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.788741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.788767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 
00:34:36.430 [2024-07-15 20:40:14.788911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.788938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.789080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.789106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.789286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.789312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.789474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.789500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.789645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.789671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.789818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.789844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.790023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.790050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.790209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.790235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.790382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.790410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.790584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.790610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 
00:34:36.430 [2024-07-15 20:40:14.790779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.790805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.790987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.791014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.791168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.791194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.791378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.791404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.791550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.791576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.791713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.791739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.791926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.791952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.792096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.792122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.792261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.792287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.430 [2024-07-15 20:40:14.792423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.792448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 
00:34:36.430 [2024-07-15 20:40:14.792614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.430 [2024-07-15 20:40:14.792640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.430 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.792806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.792833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.431 [2024-07-15 20:40:14.792997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.793023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.431 [2024-07-15 20:40:14.793194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.793221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.431 [2024-07-15 20:40:14.793391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.793417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.793583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.793609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.793752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.793777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.793922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.793948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-15 20:40:14.794100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.794125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.794268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.794296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.794465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.794492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.794667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.794693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.794824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.794850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.795030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.795056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.795206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.795232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.795396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.795422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.795570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.795596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.795792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.795818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-15 20:40:14.796000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.796027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.796170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.796196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.796340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.796367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.796505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.796531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.796669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 20:40:14.796695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d64000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.796869] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.431 [2024-07-15 20:40:14.799361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.431 [2024-07-15 20:40:14.799555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.431 [2024-07-15 20:40:14.799582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.431 [2024-07-15 20:40:14.799598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.431 [2024-07-15 20:40:14.799612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.431 [2024-07-15 20:40:14.799646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.431 20:40:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 16184 00:34:36.431 [2024-07-15 20:40:14.809196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.431 [2024-07-15 20:40:14.809345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.431 [2024-07-15 20:40:14.809372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.431 [2024-07-15 20:40:14.809388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.431 [2024-07-15 20:40:14.809402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.431 [2024-07-15 20:40:14.809433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.819259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.431 [2024-07-15 20:40:14.819409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.431 [2024-07-15 20:40:14.819436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.431 [2024-07-15 20:40:14.819451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.431 [2024-07-15 20:40:14.819465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.431 [2024-07-15 20:40:14.819496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-15 20:40:14.829326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.431 [2024-07-15 20:40:14.829477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.431 [2024-07-15 20:40:14.829503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.431 [2024-07-15 20:40:14.829519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.431 [2024-07-15 20:40:14.829533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.431 [2024-07-15 20:40:14.829563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 20:40:14.839243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.431 [2024-07-15 20:40:14.839397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.431 [2024-07-15 20:40:14.839424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.431 [2024-07-15 20:40:14.839439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.431 [2024-07-15 20:40:14.839454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.839484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 20:40:14.849287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.849444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.849470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.849491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.849507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.849537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 20:40:14.859290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.859439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.859465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.859480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.859494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.859524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 20:40:14.869300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.869455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.869481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.869496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.869510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.869541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 20:40:14.879287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.879443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.879469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.879484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.879496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.879527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 20:40:14.889318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.889462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.889490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.889505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.889519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.889549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 20:40:14.899393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.899542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.899568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.899584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.899598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.899628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 20:40:14.909378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.909534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.909559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.909575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.909589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.909619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 20:40:14.919437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.432 [2024-07-15 20:40:14.919596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.432 [2024-07-15 20:40:14.919623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.432 [2024-07-15 20:40:14.919638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.432 [2024-07-15 20:40:14.919653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.432 [2024-07-15 20:40:14.919682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:14.929477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.929637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.929664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.929680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.929709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.929738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:14.939486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.939636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.939666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.939682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.939696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.939726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 
00:34:36.691 [2024-07-15 20:40:14.949499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.949659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.949684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.949699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.949713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.949742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:14.959554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.959706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.959732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.959747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.959761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.959792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:14.969583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.969740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.969769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.969786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.969815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.969846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 
00:34:36.691 [2024-07-15 20:40:14.979583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.979748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.979774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.979789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.979802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.979853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:14.989612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.989770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.989796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.989811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.989839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.989868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:14.999665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:14.999851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:14.999884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:14.999902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:14.999916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:14.999947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 
00:34:36.691 [2024-07-15 20:40:15.009679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:15.009824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:15.009850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:15.009865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:15.009887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:15.009920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:15.019728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:15.019873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:15.019910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:15.019927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:15.019940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:15.019970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 00:34:36.691 [2024-07-15 20:40:15.029716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:15.029863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:15.029906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:15.029929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.691 [2024-07-15 20:40:15.029943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.691 [2024-07-15 20:40:15.029973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.691 qpair failed and we were unable to recover it. 
00:34:36.691 [2024-07-15 20:40:15.039769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.691 [2024-07-15 20:40:15.039936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.691 [2024-07-15 20:40:15.039963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.691 [2024-07-15 20:40:15.039977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.039991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.040020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.049811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.049958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.049986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.050001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.050014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.050044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.059812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.059967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.059994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.060010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.060023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.060053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 
00:34:36.692 [2024-07-15 20:40:15.069802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.069952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.069979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.069994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.070008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.070043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.079873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.080032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.080059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.080074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.080087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.080117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.089909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.090097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.090123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.090139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.090153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.090183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 
00:34:36.692 [2024-07-15 20:40:15.099934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.100083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.100109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.100124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.100137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.100168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.109926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.110077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.110103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.110119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.110132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.110161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.119960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.120102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.120134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.120150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.120163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.120194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 
00:34:36.692 [2024-07-15 20:40:15.130001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.130166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.130194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.130214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.130244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.130274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.140136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.140280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.140307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.140323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.140337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.140367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.150054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.150206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.150233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.150248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.150262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.150293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 
00:34:36.692 [2024-07-15 20:40:15.160081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.160224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.160251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.160269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.160288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.160334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.170098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.170254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.170281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.170296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.692 [2024-07-15 20:40:15.170309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.692 [2024-07-15 20:40:15.170338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.692 qpair failed and we were unable to recover it. 00:34:36.692 [2024-07-15 20:40:15.180135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.692 [2024-07-15 20:40:15.180276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.692 [2024-07-15 20:40:15.180303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.692 [2024-07-15 20:40:15.180318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.693 [2024-07-15 20:40:15.180332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.693 [2024-07-15 20:40:15.180362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.693 qpair failed and we were unable to recover it. 
00:34:36.693 [2024-07-15 20:40:15.190177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.693 [2024-07-15 20:40:15.190327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.693 [2024-07-15 20:40:15.190354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.693 [2024-07-15 20:40:15.190369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.693 [2024-07-15 20:40:15.190382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.693 [2024-07-15 20:40:15.190412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.693 qpair failed and we were unable to recover it. 00:34:36.693 [2024-07-15 20:40:15.200238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.693 [2024-07-15 20:40:15.200409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.693 [2024-07-15 20:40:15.200437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.693 [2024-07-15 20:40:15.200455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.693 [2024-07-15 20:40:15.200471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.693 [2024-07-15 20:40:15.200515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.693 qpair failed and we were unable to recover it. 00:34:36.693 [2024-07-15 20:40:15.210232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.693 [2024-07-15 20:40:15.210394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.693 [2024-07-15 20:40:15.210422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.693 [2024-07-15 20:40:15.210438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.693 [2024-07-15 20:40:15.210452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.693 [2024-07-15 20:40:15.210481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.693 qpair failed and we were unable to recover it. 
00:34:36.951 [2024-07-15 20:40:15.220334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.951 [2024-07-15 20:40:15.220483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.951 [2024-07-15 20:40:15.220510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.951 [2024-07-15 20:40:15.220525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.951 [2024-07-15 20:40:15.220539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.951 [2024-07-15 20:40:15.220583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.951 qpair failed and we were unable to recover it. 00:34:36.951 [2024-07-15 20:40:15.230308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.951 [2024-07-15 20:40:15.230461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.951 [2024-07-15 20:40:15.230487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.951 [2024-07-15 20:40:15.230503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.951 [2024-07-15 20:40:15.230516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.951 [2024-07-15 20:40:15.230547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.951 qpair failed and we were unable to recover it. 00:34:36.951 [2024-07-15 20:40:15.240334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.951 [2024-07-15 20:40:15.240482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.951 [2024-07-15 20:40:15.240509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.951 [2024-07-15 20:40:15.240524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.951 [2024-07-15 20:40:15.240537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.951 [2024-07-15 20:40:15.240567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.951 qpair failed and we were unable to recover it. 
00:34:36.951 [2024-07-15 20:40:15.250447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.951 [2024-07-15 20:40:15.250591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.951 [2024-07-15 20:40:15.250619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.951 [2024-07-15 20:40:15.250639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.951 [2024-07-15 20:40:15.250653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.951 [2024-07-15 20:40:15.250684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.951 qpair failed and we were unable to recover it. 00:34:36.951 [2024-07-15 20:40:15.260391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.260530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.260556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.260571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.260585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.260614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.270456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.270605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.270631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.270646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.270675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.270704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 
00:34:36.952 [2024-07-15 20:40:15.280436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.280615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.280642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.280657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.280671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.280700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.290484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.290665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.290691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.290706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.290720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.290749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.300480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.300628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.300654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.300669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.300683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.300713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 
00:34:36.952 [2024-07-15 20:40:15.310550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.310716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.310742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.310773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.310786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.310815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.320580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.320733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.320760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.320775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.320789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.320818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.330695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.330855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.330893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.330911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.330924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.330955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 
00:34:36.952 [2024-07-15 20:40:15.340639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.340782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.340809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.340829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.340844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.340911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.350672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.350823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.350851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.350890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.350906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.350950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.360697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.360847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.360873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.360899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.360913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.360942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 
00:34:36.952 [2024-07-15 20:40:15.370698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.370841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.370867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.370891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.370905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.370936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.380734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.380883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.380910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.380926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.380939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.380968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 00:34:36.952 [2024-07-15 20:40:15.390757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.390908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.390934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.952 [2024-07-15 20:40:15.390949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.952 [2024-07-15 20:40:15.390963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.952 [2024-07-15 20:40:15.390993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.952 qpair failed and we were unable to recover it. 
00:34:36.952 [2024-07-15 20:40:15.400791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.952 [2024-07-15 20:40:15.400945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.952 [2024-07-15 20:40:15.400971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.400986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.400999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.401029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 00:34:36.953 [2024-07-15 20:40:15.410862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.953 [2024-07-15 20:40:15.411058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.953 [2024-07-15 20:40:15.411084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.411100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.411113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.411142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 00:34:36.953 [2024-07-15 20:40:15.420834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.953 [2024-07-15 20:40:15.420996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.953 [2024-07-15 20:40:15.421023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.421038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.421052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.421081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 
00:34:36.953 [2024-07-15 20:40:15.430894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.953 [2024-07-15 20:40:15.431057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.953 [2024-07-15 20:40:15.431089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.431105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.431118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.431148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 00:34:36.953 [2024-07-15 20:40:15.440917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.953 [2024-07-15 20:40:15.441103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.953 [2024-07-15 20:40:15.441132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.441150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.441165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.441210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 00:34:36.953 [2024-07-15 20:40:15.450931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.953 [2024-07-15 20:40:15.451074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.953 [2024-07-15 20:40:15.451102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.451118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.451131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.451161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 
00:34:36.953 [2024-07-15 20:40:15.460991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.953 [2024-07-15 20:40:15.461140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.953 [2024-07-15 20:40:15.461166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.461182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.461195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.461240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 00:34:36.953 [2024-07-15 20:40:15.471144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.953 [2024-07-15 20:40:15.471311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.953 [2024-07-15 20:40:15.471340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.953 [2024-07-15 20:40:15.471373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.953 [2024-07-15 20:40:15.471387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:36.953 [2024-07-15 20:40:15.471438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.953 qpair failed and we were unable to recover it. 00:34:37.211 [2024-07-15 20:40:15.481070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.211 [2024-07-15 20:40:15.481267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.211 [2024-07-15 20:40:15.481295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.211 [2024-07-15 20:40:15.481309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.211 [2024-07-15 20:40:15.481322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.211 [2024-07-15 20:40:15.481352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.211 qpair failed and we were unable to recover it. 
00:34:37.211 [2024-07-15 20:40:15.491057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.211 [2024-07-15 20:40:15.491213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.211 [2024-07-15 20:40:15.491240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.211 [2024-07-15 20:40:15.491255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.211 [2024-07-15 20:40:15.491269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.211 [2024-07-15 20:40:15.491298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.501100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.501248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.501276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.501291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.501305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.501350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.511139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.511287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.511315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.511330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.511344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.511387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 
00:34:37.212 [2024-07-15 20:40:15.521172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.521319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.521351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.521368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.521381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.521411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.531178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.531316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.531343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.531359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.531372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.531402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.541228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.541389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.541415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.541431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.541444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.541489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 
00:34:37.212 [2024-07-15 20:40:15.551227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.551435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.551462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.551477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.551490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.551519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.561247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.561392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.561418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.561433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.561468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.561499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.571266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.571410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.571437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.571452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.571465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.571496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 
00:34:37.212 [2024-07-15 20:40:15.581343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.581511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.581537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.581552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.581565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.581609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.591381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.591545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.591572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.591588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.591617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.591645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 00:34:37.212 [2024-07-15 20:40:15.601546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.212 [2024-07-15 20:40:15.601709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.212 [2024-07-15 20:40:15.601735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.212 [2024-07-15 20:40:15.601750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.212 [2024-07-15 20:40:15.601763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.212 [2024-07-15 20:40:15.601795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.212 qpair failed and we were unable to recover it. 
00:34:37.213 [2024-07-15 20:40:15.611457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.611607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.611634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.611650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.611663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.611693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 00:34:37.213 [2024-07-15 20:40:15.621476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.621622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.621649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.621664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.621677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.621707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 00:34:37.213 [2024-07-15 20:40:15.631534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.631696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.631723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.631738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.631751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.631781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 
00:34:37.213 [2024-07-15 20:40:15.641455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.641620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.641646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.641662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.641675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.641704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 00:34:37.213 [2024-07-15 20:40:15.651554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.651731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.651759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.651799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.651813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.651857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 00:34:37.213 [2024-07-15 20:40:15.661605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.661749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.661790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.661805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.661818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.661861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 
00:34:37.213 [2024-07-15 20:40:15.671566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.671758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.671799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.671814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.671827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.671856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 00:34:37.213 [2024-07-15 20:40:15.681585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.681765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.681791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.681807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.681820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.681849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 00:34:37.213 [2024-07-15 20:40:15.691602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.691745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.691771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.691787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.691801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.691830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 
00:34:37.213 [2024-07-15 20:40:15.701632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.701775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.701802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.701817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.213 [2024-07-15 20:40:15.701830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.213 [2024-07-15 20:40:15.701860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.213 qpair failed and we were unable to recover it. 00:34:37.213 [2024-07-15 20:40:15.711688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.213 [2024-07-15 20:40:15.711870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.213 [2024-07-15 20:40:15.711905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.213 [2024-07-15 20:40:15.711922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.214 [2024-07-15 20:40:15.711936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.214 [2024-07-15 20:40:15.711966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.214 qpair failed and we were unable to recover it. 00:34:37.214 [2024-07-15 20:40:15.721696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.214 [2024-07-15 20:40:15.721842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.214 [2024-07-15 20:40:15.721869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.214 [2024-07-15 20:40:15.721896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.214 [2024-07-15 20:40:15.721911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.214 [2024-07-15 20:40:15.721942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.214 qpair failed and we were unable to recover it. 
00:34:37.214 [2024-07-15 20:40:15.731715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.214 [2024-07-15 20:40:15.731865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.214 [2024-07-15 20:40:15.731901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.214 [2024-07-15 20:40:15.731923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.214 [2024-07-15 20:40:15.731936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.214 [2024-07-15 20:40:15.731967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.214 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.741777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.741939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.741966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.741987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.742003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.742046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.751813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.751970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.751996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.752011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.752024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.752054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 
00:34:37.473 [2024-07-15 20:40:15.761805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.761960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.761986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.762001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.762014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.762045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.771850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.772050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.772077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.772091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.772104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.772134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.781867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.782043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.782069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.782084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.782097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.782127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 
00:34:37.473 [2024-07-15 20:40:15.791916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.792070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.792095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.792111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.792124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.792153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.801958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.802120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.802146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.802160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.802173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.802203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.811960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.812112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.812139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.812154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.812167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.812197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 
00:34:37.473 [2024-07-15 20:40:15.822016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.822164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.822190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.822206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.822218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.822263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.832077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.832225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.832256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.832272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.832286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.832318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.842039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.842190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.842215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.842230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.842242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.842272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 
00:34:37.473 [2024-07-15 20:40:15.852124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.852321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.852361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.852375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.852387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.852432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.473 [2024-07-15 20:40:15.862135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.473 [2024-07-15 20:40:15.862278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.473 [2024-07-15 20:40:15.862304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.473 [2024-07-15 20:40:15.862319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.473 [2024-07-15 20:40:15.862332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.473 [2024-07-15 20:40:15.862361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.473 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.872221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.872372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.872398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.872413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.872426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.872463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 
00:34:37.474 [2024-07-15 20:40:15.882200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.882347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.882373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.882388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.882400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.882431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.892205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.892348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.892374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.892389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.892403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.892447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.902212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.902395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.902421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.902437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.902450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.902480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 
00:34:37.474 [2024-07-15 20:40:15.912241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.912391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.912418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.912433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.912446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.912475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.922254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.922394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.922425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.922441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.922454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.922498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.932358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.932503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.932530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.932546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.932559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.932627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 
00:34:37.474 [2024-07-15 20:40:15.942343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.942485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.942511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.942525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.942539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.942584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.952353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.952515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.952541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.952557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.952570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.952600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.962397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.962548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.962574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.962589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.962624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.962656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 
00:34:37.474 [2024-07-15 20:40:15.972454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.972623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.972649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.972679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.972694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.972752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.982454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.982596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.982622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.982637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.982651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.982683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 00:34:37.474 [2024-07-15 20:40:15.992490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.474 [2024-07-15 20:40:15.992674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.474 [2024-07-15 20:40:15.992715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.474 [2024-07-15 20:40:15.992731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.474 [2024-07-15 20:40:15.992744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.474 [2024-07-15 20:40:15.992788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.474 qpair failed and we were unable to recover it. 
00:34:37.734 [2024-07-15 20:40:16.002647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.734 [2024-07-15 20:40:16.002798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.734 [2024-07-15 20:40:16.002839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.734 [2024-07-15 20:40:16.002854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.734 [2024-07-15 20:40:16.002867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.734 [2024-07-15 20:40:16.002928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.012615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.012776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.012805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.012824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.012838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.012868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.022545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.022691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.022717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.022732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.022745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.022775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-15 20:40:16.032585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.032730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.032756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.032771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.032785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.032831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.042637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.042812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.042838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.042853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.042866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.042906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.052624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.052770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.052796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.052814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.052834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.052893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-15 20:40:16.062683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.062870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.062905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.062921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.062935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.062965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.072702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.072905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.072931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.072946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.072960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.072990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.082808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.082977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.083003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.083018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.083032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.083063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-15 20:40:16.092751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.092915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.092941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.092956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.092969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.092999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.102792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.102949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.102975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.102990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.103004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.103034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.112824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.112995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.113021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.113036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.113050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.113079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-15 20:40:16.122864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.123025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.123051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.123066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.123080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.123122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.132846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.133007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.133033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.133048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.133061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.133091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-15 20:40:16.142881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.143039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.735 [2024-07-15 20:40:16.143064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.735 [2024-07-15 20:40:16.143086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.735 [2024-07-15 20:40:16.143101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.735 [2024-07-15 20:40:16.143130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-15 20:40:16.152975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.735 [2024-07-15 20:40:16.153155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.153180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.153195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.153209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.153239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-15 20:40:16.162949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.163137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.163162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.163177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.163191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.163220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-15 20:40:16.172984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.173131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.173157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.173172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.173185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.173214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 
00:34:37.736 [2024-07-15 20:40:16.183032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.183221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.183247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.183262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.183275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.183304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-15 20:40:16.193038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.193194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.193219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.193234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.193248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.193277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-15 20:40:16.203063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.203211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.203237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.203251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.203265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.203296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 
00:34:37.736 [2024-07-15 20:40:16.213225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.213391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.213416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.213431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.213445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.213490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-15 20:40:16.223139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.223308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.223333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.223348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.223362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.223391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-15 20:40:16.233207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.233363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.233394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.233410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.233423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.233453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 
00:34:37.736 [2024-07-15 20:40:16.243187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.243381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.243407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.243422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.243436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.243465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-15 20:40:16.253192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.736 [2024-07-15 20:40:16.253336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.736 [2024-07-15 20:40:16.253362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.736 [2024-07-15 20:40:16.253377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.736 [2024-07-15 20:40:16.253391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.736 [2024-07-15 20:40:16.253434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.263260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.263437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.263461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.263476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.263490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.263520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 
00:34:37.996 [2024-07-15 20:40:16.273334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.273530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.273555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.273586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.273600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.273650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.283322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.283482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.283510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.283529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.283559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.283590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.293327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.293478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.293504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.293518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.293533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.293578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 
00:34:37.996 [2024-07-15 20:40:16.303373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.303551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.303577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.303592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.303605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.303636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.313495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.313648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.313674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.313689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.313702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.313732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.323410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.323562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.323593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.323609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.323623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.323668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 
00:34:37.996 [2024-07-15 20:40:16.333414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.333564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.333590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.333605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.333619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.333648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.343471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.343618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.343644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.343659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.343672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.343715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.353477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.353630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.353655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.353670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.353684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.353713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 
00:34:37.996 [2024-07-15 20:40:16.363498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.363669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.363695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.996 [2024-07-15 20:40:16.363710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.996 [2024-07-15 20:40:16.363724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.996 [2024-07-15 20:40:16.363760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.996 qpair failed and we were unable to recover it. 00:34:37.996 [2024-07-15 20:40:16.373515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.996 [2024-07-15 20:40:16.373667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.996 [2024-07-15 20:40:16.373693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.373708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.373722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.373753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.383565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.383748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.383774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.383788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.383801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.383829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 
00:34:37.997 [2024-07-15 20:40:16.393620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.393800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.393827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.393843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.393874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.393934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.403619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.403774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.403799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.403814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.403828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.403874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.413632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.413780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.413806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.413821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.413835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.413864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 
00:34:37.997 [2024-07-15 20:40:16.423676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.423882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.423908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.423923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.423937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.423967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.433699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.433862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.433895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.433911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.433924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.433954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.443800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.443992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.444019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.444033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.444047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.444077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 
00:34:37.997 [2024-07-15 20:40:16.453743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.453900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.453926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.453941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.453962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.453994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.463770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.463932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.463957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.463972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.463986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.464016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.473824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.474036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.474061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.474076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.474089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.474118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 
00:34:37.997 [2024-07-15 20:40:16.483902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.484076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.484101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.484115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.484129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.484159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.493862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.494012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.494038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.494053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.494066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.494097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 00:34:37.997 [2024-07-15 20:40:16.503917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.504109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.504134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.504149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.997 [2024-07-15 20:40:16.504163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.997 [2024-07-15 20:40:16.504193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.997 qpair failed and we were unable to recover it. 
00:34:37.997 [2024-07-15 20:40:16.513963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.997 [2024-07-15 20:40:16.514125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.997 [2024-07-15 20:40:16.514151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.997 [2024-07-15 20:40:16.514166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.998 [2024-07-15 20:40:16.514180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.998 [2024-07-15 20:40:16.514225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.998 qpair failed and we were unable to recover it. 00:34:37.998 [2024-07-15 20:40:16.523975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.998 [2024-07-15 20:40:16.524141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.998 [2024-07-15 20:40:16.524167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.998 [2024-07-15 20:40:16.524182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.998 [2024-07-15 20:40:16.524195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:37.998 [2024-07-15 20:40:16.524225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.998 qpair failed and we were unable to recover it. 00:34:38.257 [2024-07-15 20:40:16.533981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.534143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.257 [2024-07-15 20:40:16.534169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.257 [2024-07-15 20:40:16.534184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.257 [2024-07-15 20:40:16.534198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.257 [2024-07-15 20:40:16.534227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.257 qpair failed and we were unable to recover it. 
00:34:38.257 [2024-07-15 20:40:16.544035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.544218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.257 [2024-07-15 20:40:16.544244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.257 [2024-07-15 20:40:16.544265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.257 [2024-07-15 20:40:16.544280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.257 [2024-07-15 20:40:16.544324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.257 qpair failed and we were unable to recover it. 00:34:38.257 [2024-07-15 20:40:16.554088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.554260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.257 [2024-07-15 20:40:16.554285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.257 [2024-07-15 20:40:16.554300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.257 [2024-07-15 20:40:16.554314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.257 [2024-07-15 20:40:16.554344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.257 qpair failed and we were unable to recover it. 00:34:38.257 [2024-07-15 20:40:16.564104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.564301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.257 [2024-07-15 20:40:16.564328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.257 [2024-07-15 20:40:16.564342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.257 [2024-07-15 20:40:16.564355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.257 [2024-07-15 20:40:16.564385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.257 qpair failed and we were unable to recover it. 
00:34:38.257 [2024-07-15 20:40:16.574129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.574282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.257 [2024-07-15 20:40:16.574307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.257 [2024-07-15 20:40:16.574322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.257 [2024-07-15 20:40:16.574336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.257 [2024-07-15 20:40:16.574365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.257 qpair failed and we were unable to recover it. 00:34:38.257 [2024-07-15 20:40:16.584154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.584303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.257 [2024-07-15 20:40:16.584329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.257 [2024-07-15 20:40:16.584345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.257 [2024-07-15 20:40:16.584359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.257 [2024-07-15 20:40:16.584404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.257 qpair failed and we were unable to recover it. 00:34:38.257 [2024-07-15 20:40:16.594278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.594431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.257 [2024-07-15 20:40:16.594457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.257 [2024-07-15 20:40:16.594472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.257 [2024-07-15 20:40:16.594486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.257 [2024-07-15 20:40:16.594531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.257 qpair failed and we were unable to recover it. 
00:34:38.257 [2024-07-15 20:40:16.604239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.257 [2024-07-15 20:40:16.604395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.604424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.604442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.604455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.604501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.614235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.614412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.614437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.614452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.614481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.614511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.624281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.624433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.624459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.624474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.624488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.624518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 
00:34:38.258 [2024-07-15 20:40:16.634294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.634445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.634476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.634492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.634506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.634536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.644346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.644500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.644526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.644542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.644555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.644600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.654458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.654629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.654655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.654669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.654683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.654727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 
00:34:38.258 [2024-07-15 20:40:16.664365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.664505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.664532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.664547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.664560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.664591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.674431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.674584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.674610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.674625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.674639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.674675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.684439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.684589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.684614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.684629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.684643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.684673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 
00:34:38.258 [2024-07-15 20:40:16.694444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.694588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.694614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.694628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.694642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.694672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.704512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.704704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.704744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.704760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.704774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.704817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.714541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.714708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.714733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.714748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.714762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.714792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 
00:34:38.258 [2024-07-15 20:40:16.724582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.724755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.724786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.724802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.724816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.724846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.734575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.734751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.734776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.734806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.258 [2024-07-15 20:40:16.734820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.258 [2024-07-15 20:40:16.734864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.258 qpair failed and we were unable to recover it. 00:34:38.258 [2024-07-15 20:40:16.744608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.258 [2024-07-15 20:40:16.744758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.258 [2024-07-15 20:40:16.744784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.258 [2024-07-15 20:40:16.744799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.259 [2024-07-15 20:40:16.744812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.259 [2024-07-15 20:40:16.744842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.259 qpair failed and we were unable to recover it. 
00:34:38.259 [2024-07-15 20:40:16.754682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.259 [2024-07-15 20:40:16.754893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.259 [2024-07-15 20:40:16.754919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.259 [2024-07-15 20:40:16.754934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.259 [2024-07-15 20:40:16.754948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.259 [2024-07-15 20:40:16.754978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.259 qpair failed and we were unable to recover it. 00:34:38.259 [2024-07-15 20:40:16.764649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.259 [2024-07-15 20:40:16.764800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.259 [2024-07-15 20:40:16.764826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.259 [2024-07-15 20:40:16.764840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.259 [2024-07-15 20:40:16.764854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.259 [2024-07-15 20:40:16.764898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.259 qpair failed and we were unable to recover it. 00:34:38.259 [2024-07-15 20:40:16.774691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.259 [2024-07-15 20:40:16.774860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.259 [2024-07-15 20:40:16.774894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.259 [2024-07-15 20:40:16.774910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.259 [2024-07-15 20:40:16.774924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.259 [2024-07-15 20:40:16.774953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.259 qpair failed and we were unable to recover it. 
00:34:38.259 [2024-07-15 20:40:16.784743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.259 [2024-07-15 20:40:16.784954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.259 [2024-07-15 20:40:16.784980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.259 [2024-07-15 20:40:16.784995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.259 [2024-07-15 20:40:16.785009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.259 [2024-07-15 20:40:16.785038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.259 qpair failed and we were unable to recover it. 00:34:38.517 [2024-07-15 20:40:16.794785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.517 [2024-07-15 20:40:16.794945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.517 [2024-07-15 20:40:16.794971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.517 [2024-07-15 20:40:16.794985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.794999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.795029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.804765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.804918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.804944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.804959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.804972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.805002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 
00:34:38.518 [2024-07-15 20:40:16.814794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.815062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.815092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.815108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.815121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.815164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.824827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.825017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.825043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.825058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.825071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.825100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.834865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.835025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.835051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.835067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.835080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.835110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 
00:34:38.518 [2024-07-15 20:40:16.844919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.845076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.845102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.845117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.845130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.845161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.854920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.855064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.855089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.855104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.855124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.855155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.864954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.865110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.865137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.865152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.865168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.865214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 
00:34:38.518 [2024-07-15 20:40:16.874976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.875126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.875153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.875167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.875181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.875211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.884994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.885140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.885166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.885180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.885194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.885224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.895038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.895192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.895218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.895233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.895246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.895294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 
00:34:38.518 [2024-07-15 20:40:16.905070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.905215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.905242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.905256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.905270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.905316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.915092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.915277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.915303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.915318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.915332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.915361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 00:34:38.518 [2024-07-15 20:40:16.925112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.925250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.925275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.518 [2024-07-15 20:40:16.925290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.518 [2024-07-15 20:40:16.925314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.518 [2024-07-15 20:40:16.925344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.518 qpair failed and we were unable to recover it. 
00:34:38.518 [2024-07-15 20:40:16.935214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.518 [2024-07-15 20:40:16.935391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.518 [2024-07-15 20:40:16.935441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:16.935456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:16.935470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:16.935513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:16.945180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:16.945343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:16.945368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:16.945389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:16.945405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:16.945436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:16.955250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:16.955408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:16.955433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:16.955448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:16.955477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:16.955508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 
00:34:38.519 [2024-07-15 20:40:16.965238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:16.965392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:16.965430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:16.965445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:16.965473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:16.965503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:16.975292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:16.975473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:16.975500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:16.975514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:16.975528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:16.975557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:16.985280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:16.985435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:16.985461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:16.985475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:16.985489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:16.985519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 
00:34:38.519 [2024-07-15 20:40:16.995407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:16.995555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:16.995581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:16.995596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:16.995610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:16.995639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:17.005382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:17.005580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:17.005606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:17.005621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:17.005634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:17.005664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:17.015376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:17.015524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:17.015549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:17.015564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:17.015578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:17.015608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 
00:34:38.519 [2024-07-15 20:40:17.025401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:17.025539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:17.025565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:17.025581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:17.025594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:17.025625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:17.035460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:17.035607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:17.035633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:17.035653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:17.035667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:17.035697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 00:34:38.519 [2024-07-15 20:40:17.045522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.519 [2024-07-15 20:40:17.045688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.519 [2024-07-15 20:40:17.045715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.519 [2024-07-15 20:40:17.045729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.519 [2024-07-15 20:40:17.045742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.519 [2024-07-15 20:40:17.045787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.519 qpair failed and we were unable to recover it. 
00:34:38.778 [2024-07-15 20:40:17.055575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.778 [2024-07-15 20:40:17.055728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.778 [2024-07-15 20:40:17.055755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.778 [2024-07-15 20:40:17.055770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.778 [2024-07-15 20:40:17.055783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.778 [2024-07-15 20:40:17.055812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-15 20:40:17.065538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.778 [2024-07-15 20:40:17.065684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.778 [2024-07-15 20:40:17.065710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.778 [2024-07-15 20:40:17.065725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.778 [2024-07-15 20:40:17.065738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.778 [2024-07-15 20:40:17.065768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-15 20:40:17.075557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.778 [2024-07-15 20:40:17.075707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.778 [2024-07-15 20:40:17.075733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.778 [2024-07-15 20:40:17.075749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.778 [2024-07-15 20:40:17.075762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.778 [2024-07-15 20:40:17.075791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.778 qpair failed and we were unable to recover it. 
00:34:38.778 [2024-07-15 20:40:17.085570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.778 [2024-07-15 20:40:17.085727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.778 [2024-07-15 20:40:17.085754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.778 [2024-07-15 20:40:17.085770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.778 [2024-07-15 20:40:17.085783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.778 [2024-07-15 20:40:17.085811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-15 20:40:17.095661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.778 [2024-07-15 20:40:17.095827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.778 [2024-07-15 20:40:17.095854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.778 [2024-07-15 20:40:17.095869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.778 [2024-07-15 20:40:17.095889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.778 [2024-07-15 20:40:17.095920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-15 20:40:17.105731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.105870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.105905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.105920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.105934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.105964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-15 20:40:17.115673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.115853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.115890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.115909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.115931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.115960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-15 20:40:17.125746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.125899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.125930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.125947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.125960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.125991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-15 20:40:17.135717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.135868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.135902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.135918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.135932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.135962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-15 20:40:17.145737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.145889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.145915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.145929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.145942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.145973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-15 20:40:17.155765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.155921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.155947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.155962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.155975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.156005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-15 20:40:17.165863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.166078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.166105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.166119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.166131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.166168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-15 20:40:17.175916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.176076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.176102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.176118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.176132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.176161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-15 20:40:17.185873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.186028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.186059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.186076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.186090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.186123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-15 20:40:17.195918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.196070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.196098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.196113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.779 [2024-07-15 20:40:17.196126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.779 [2024-07-15 20:40:17.196156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-15 20:40:17.205912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.779 [2024-07-15 20:40:17.206055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.779 [2024-07-15 20:40:17.206080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.779 [2024-07-15 20:40:17.206096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.206110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.206140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:38.780 [2024-07-15 20:40:17.215933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.216073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.216104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.216120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.216135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.216165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:38.780 [2024-07-15 20:40:17.225969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.226111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.226137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.226152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.226165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.226196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 
00:34:38.780 [2024-07-15 20:40:17.236038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.236211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.236240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.236256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.236270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.236301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:38.780 [2024-07-15 20:40:17.246047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.246208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.246235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.246250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.246263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.246293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:38.780 [2024-07-15 20:40:17.256057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.256205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.256232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.256247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.256266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.256297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 
00:34:38.780 [2024-07-15 20:40:17.266060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.266202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.266228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.266244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.266257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.266286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:38.780 [2024-07-15 20:40:17.276126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.276288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.276312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.276326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.276340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.276371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:38.780 [2024-07-15 20:40:17.286119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.286265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.286291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.286306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.286320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.286350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 
00:34:38.780 [2024-07-15 20:40:17.296166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.296316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.296343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.296358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.296375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.296405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:38.780 [2024-07-15 20:40:17.306195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.780 [2024-07-15 20:40:17.306368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.780 [2024-07-15 20:40:17.306394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.780 [2024-07-15 20:40:17.306409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.780 [2024-07-15 20:40:17.306423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:38.780 [2024-07-15 20:40:17.306454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:38.780 qpair failed and we were unable to recover it. 00:34:39.039 [2024-07-15 20:40:17.316209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.039 [2024-07-15 20:40:17.316363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.039 [2024-07-15 20:40:17.316389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.039 [2024-07-15 20:40:17.316404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.039 [2024-07-15 20:40:17.316418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.039 [2024-07-15 20:40:17.316449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.039 qpair failed and we were unable to recover it. 
00:34:39.039 [2024-07-15 20:40:17.326257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.039 [2024-07-15 20:40:17.326403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.039 [2024-07-15 20:40:17.326430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.039 [2024-07-15 20:40:17.326445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.039 [2024-07-15 20:40:17.326459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.039 [2024-07-15 20:40:17.326502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.039 qpair failed and we were unable to recover it. 00:34:39.039 [2024-07-15 20:40:17.336264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.039 [2024-07-15 20:40:17.336411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.039 [2024-07-15 20:40:17.336438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.040 [2024-07-15 20:40:17.336453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.040 [2024-07-15 20:40:17.336466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.040 [2024-07-15 20:40:17.336496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.040 qpair failed and we were unable to recover it. 00:34:39.040 [2024-07-15 20:40:17.346333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.040 [2024-07-15 20:40:17.346497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.040 [2024-07-15 20:40:17.346526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.040 [2024-07-15 20:40:17.346546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.040 [2024-07-15 20:40:17.346561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.040 [2024-07-15 20:40:17.346606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.040 qpair failed and we were unable to recover it. 
00:34:39.040 [2024-07-15 20:40:17.356322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.040 [2024-07-15 20:40:17.356478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.040 [2024-07-15 20:40:17.356504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.040 [2024-07-15 20:40:17.356520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.040 [2024-07-15 20:40:17.356533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90
00:34:39.040 [2024-07-15 20:40:17.356563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:39.040 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure sequence repeats for every subsequent I/O qpair connect attempt against tqpair=0x7f3d64000b90 (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; CQ transport error -6 on qpair id 1) from 20:40:17.366 through 20:40:18.038; only the per-attempt timestamps change, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:39.562 [2024-07-15 20:40:18.048273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.562 [2024-07-15 20:40:18.048455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.562 [2024-07-15 20:40:18.048481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.562 [2024-07-15 20:40:18.048496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.562 [2024-07-15 20:40:18.048509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.562 [2024-07-15 20:40:18.048539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.562 qpair failed and we were unable to recover it. 00:34:39.562 [2024-07-15 20:40:18.058321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.562 [2024-07-15 20:40:18.058509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.562 [2024-07-15 20:40:18.058536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.562 [2024-07-15 20:40:18.058551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.562 [2024-07-15 20:40:18.058565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.562 [2024-07-15 20:40:18.058601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.562 qpair failed and we were unable to recover it. 00:34:39.562 [2024-07-15 20:40:18.068331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.562 [2024-07-15 20:40:18.068504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.562 [2024-07-15 20:40:18.068531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.562 [2024-07-15 20:40:18.068546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.562 [2024-07-15 20:40:18.068560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.562 [2024-07-15 20:40:18.068590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.562 qpair failed and we were unable to recover it. 
00:34:39.562 [2024-07-15 20:40:18.078419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.562 [2024-07-15 20:40:18.078587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.562 [2024-07-15 20:40:18.078612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.562 [2024-07-15 20:40:18.078641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.562 [2024-07-15 20:40:18.078655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.562 [2024-07-15 20:40:18.078699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.562 qpair failed and we were unable to recover it. 00:34:39.562 [2024-07-15 20:40:18.088369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.562 [2024-07-15 20:40:18.088518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.562 [2024-07-15 20:40:18.088543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.562 [2024-07-15 20:40:18.088558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.562 [2024-07-15 20:40:18.088570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.562 [2024-07-15 20:40:18.088601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.562 qpair failed and we were unable to recover it. 00:34:39.821 [2024-07-15 20:40:18.098411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.821 [2024-07-15 20:40:18.098564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.821 [2024-07-15 20:40:18.098590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.821 [2024-07-15 20:40:18.098604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.821 [2024-07-15 20:40:18.098617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.821 [2024-07-15 20:40:18.098647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.821 qpair failed and we were unable to recover it. 
00:34:39.821 [2024-07-15 20:40:18.108428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.821 [2024-07-15 20:40:18.108576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.821 [2024-07-15 20:40:18.108602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.821 [2024-07-15 20:40:18.108617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.821 [2024-07-15 20:40:18.108630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.821 [2024-07-15 20:40:18.108661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.821 qpair failed and we were unable to recover it. 00:34:39.821 [2024-07-15 20:40:18.118565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.821 [2024-07-15 20:40:18.118715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.821 [2024-07-15 20:40:18.118741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.821 [2024-07-15 20:40:18.118756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.821 [2024-07-15 20:40:18.118769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.821 [2024-07-15 20:40:18.118814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.821 qpair failed and we were unable to recover it. 00:34:39.821 [2024-07-15 20:40:18.128498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.821 [2024-07-15 20:40:18.128696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.821 [2024-07-15 20:40:18.128722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.821 [2024-07-15 20:40:18.128738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.821 [2024-07-15 20:40:18.128752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.821 [2024-07-15 20:40:18.128793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 
00:34:39.822 [2024-07-15 20:40:18.138518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.138684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.138711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.138725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.138739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.138785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.148547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.148707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.148733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.148747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.148782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.148813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.158597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.158752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.158779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.158799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.158827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.158858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 
00:34:39.822 [2024-07-15 20:40:18.168612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.168774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.168801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.168815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.168828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.168859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.178652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.178788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.178814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.178829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.178842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.178873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.188661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.188829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.188854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.188869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.188891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.188923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 
00:34:39.822 [2024-07-15 20:40:18.198680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.198828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.198854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.198869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.198894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.198925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.208723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.208868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.208903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.208919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.208932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.208962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.218769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.218960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.218985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.219000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.219013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.219043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 
00:34:39.822 [2024-07-15 20:40:18.228839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.229000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.229025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.229040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.229052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.229083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.238803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.238963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.238988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.239009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.239024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.239054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.248813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.248970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.249006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.249021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.249036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.249066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 
00:34:39.822 [2024-07-15 20:40:18.258865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.259049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.259075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.259090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.259103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.259133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.268883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.269065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.822 [2024-07-15 20:40:18.269090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.822 [2024-07-15 20:40:18.269105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.822 [2024-07-15 20:40:18.269118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.822 [2024-07-15 20:40:18.269148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.822 qpair failed and we were unable to recover it. 00:34:39.822 [2024-07-15 20:40:18.278920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.822 [2024-07-15 20:40:18.279063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.279089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.279104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.279117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.279148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 
00:34:39.823 [2024-07-15 20:40:18.288937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.823 [2024-07-15 20:40:18.289084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.289110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.289125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.289138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.289169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 00:34:39.823 [2024-07-15 20:40:18.299036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.823 [2024-07-15 20:40:18.299221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.299248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.299263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.299277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.299307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 00:34:39.823 [2024-07-15 20:40:18.309041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.823 [2024-07-15 20:40:18.309242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.309283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.309298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.309311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.309340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 
00:34:39.823 [2024-07-15 20:40:18.319055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.823 [2024-07-15 20:40:18.319208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.319234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.319248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.319262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.319291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 00:34:39.823 [2024-07-15 20:40:18.329046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.823 [2024-07-15 20:40:18.329187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.329218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.329234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.329248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.329290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 00:34:39.823 [2024-07-15 20:40:18.339062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.823 [2024-07-15 20:40:18.339211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.339237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.339251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.339265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.339296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 
00:34:39.823 [2024-07-15 20:40:18.349133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.823 [2024-07-15 20:40:18.349348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.823 [2024-07-15 20:40:18.349387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.823 [2024-07-15 20:40:18.349402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.823 [2024-07-15 20:40:18.349416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:39.823 [2024-07-15 20:40:18.349459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:39.823 qpair failed and we were unable to recover it. 00:34:40.083 [2024-07-15 20:40:18.359166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.359323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.359348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.359363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.359377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.359407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 00:34:40.083 [2024-07-15 20:40:18.369183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.369329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.369355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.369369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.369398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.369435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 
00:34:40.083 [2024-07-15 20:40:18.379191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.379330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.379355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.379370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.379384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.379413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 00:34:40.083 [2024-07-15 20:40:18.389264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.389443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.389468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.389483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.389495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.389523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 00:34:40.083 [2024-07-15 20:40:18.399262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.399447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.399486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.399501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.399514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.399557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 
00:34:40.083 [2024-07-15 20:40:18.409347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.409523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.409548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.409563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.409592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.409621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 00:34:40.083 [2024-07-15 20:40:18.419296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.419438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.419469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.419484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.419498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.419527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 00:34:40.083 [2024-07-15 20:40:18.429321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.429468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.429494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.429509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.083 [2024-07-15 20:40:18.429522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.083 [2024-07-15 20:40:18.429551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.083 qpair failed and we were unable to recover it. 
00:34:40.083 [2024-07-15 20:40:18.439420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.083 [2024-07-15 20:40:18.439613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.083 [2024-07-15 20:40:18.439637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.083 [2024-07-15 20:40:18.439652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.439665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.439694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.449413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.449577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.449604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.449619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.449647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.449677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.459497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.459638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.459664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.459679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.459693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.459729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 
00:34:40.084 [2024-07-15 20:40:18.469433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.469581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.469607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.469621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.469635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.469665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.479567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.479726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.479751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.479766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.479780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.479809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.489487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.489656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.489681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.489696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.489710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.489739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 
00:34:40.084 [2024-07-15 20:40:18.499504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.499663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.499688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.499703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.499717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.499747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.509577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.509730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.509764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.509783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.509811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.509843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.519657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.519809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.519835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.519851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.519865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.519904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 
00:34:40.084 [2024-07-15 20:40:18.529605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.529760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.529786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.529800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.529815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.529845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.539639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.539784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.539810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.539825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.539839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.539892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.549696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.549854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.549887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.549904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.549923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.549955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 
00:34:40.084 [2024-07-15 20:40:18.559678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.559828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.559853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.559868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.559891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.559922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.569733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.569890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.569916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.569931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.569944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.084 [2024-07-15 20:40:18.569974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.084 qpair failed and we were unable to recover it. 00:34:40.084 [2024-07-15 20:40:18.579830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.084 [2024-07-15 20:40:18.580025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.084 [2024-07-15 20:40:18.580051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.084 [2024-07-15 20:40:18.580065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.084 [2024-07-15 20:40:18.580079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.085 [2024-07-15 20:40:18.580108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.085 qpair failed and we were unable to recover it. 
00:34:40.085 [2024-07-15 20:40:18.589770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.085 [2024-07-15 20:40:18.589923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.085 [2024-07-15 20:40:18.589949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.085 [2024-07-15 20:40:18.589964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.085 [2024-07-15 20:40:18.589977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.085 [2024-07-15 20:40:18.590021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.085 qpair failed and we were unable to recover it. 00:34:40.085 [2024-07-15 20:40:18.599803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.085 [2024-07-15 20:40:18.599971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.085 [2024-07-15 20:40:18.599998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.085 [2024-07-15 20:40:18.600013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.085 [2024-07-15 20:40:18.600027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.085 [2024-07-15 20:40:18.600057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.085 qpair failed and we were unable to recover it. 00:34:40.085 [2024-07-15 20:40:18.609920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.085 [2024-07-15 20:40:18.610085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.085 [2024-07-15 20:40:18.610111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.085 [2024-07-15 20:40:18.610126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.085 [2024-07-15 20:40:18.610140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.085 [2024-07-15 20:40:18.610183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.085 qpair failed and we were unable to recover it. 
00:34:40.343 [2024-07-15 20:40:18.619950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.343 [2024-07-15 20:40:18.620107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.343 [2024-07-15 20:40:18.620133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.620148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.620164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.620209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.629898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.630100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.630126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.630141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.630155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.630186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.639943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.640099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.640125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.640149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.640164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.640195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-15 20:40:18.649949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.650107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.650133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.650147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.650161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.650190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.659999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.660149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.660175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.660190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.660204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.660233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.670029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.670174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.670200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.670214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.670228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.670258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-15 20:40:18.680030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.680182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.680207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.680222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.680236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.680266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.690049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.690199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.690225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.690240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.690253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.690284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.700114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.700257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.700283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.700297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.700311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.700340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-15 20:40:18.710187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.710336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.710363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.710379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.710393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.710441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.720272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.720449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.720474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.720488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.720501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.720530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.730193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.730344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.730370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.730391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.730406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.730436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 
00:34:40.344 [2024-07-15 20:40:18.740198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.740345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.740374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.740390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.740403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.740450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.750252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.750403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.750429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.750444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.750458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.344 [2024-07-15 20:40:18.750488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.344 qpair failed and we were unable to recover it. 00:34:40.344 [2024-07-15 20:40:18.760282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.344 [2024-07-15 20:40:18.760440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.344 [2024-07-15 20:40:18.760465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.344 [2024-07-15 20:40:18.760480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.344 [2024-07-15 20:40:18.760494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.760523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-15 20:40:18.770356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.770507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.770533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.770547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.770561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.770607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-15 20:40:18.780326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.780471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.780497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.780521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.780534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.780563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-15 20:40:18.790399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.790584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.790609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.790624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.790638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.790667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-15 20:40:18.800384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.800579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.800605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.800620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.800633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.800663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-15 20:40:18.810433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.810584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.810610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.810624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.810638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.810668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-15 20:40:18.820485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.820632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.820663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.820678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.820692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.820737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-15 20:40:18.830525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.830729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.830769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.830783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.830797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.830840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-15 20:40:18.840501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.840655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.840682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.840696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.840710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.840741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-15 20:40:18.850531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.850693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.850719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.850733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.850747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.850777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 
00:34:40.345 [2024-07-15 20:40:18.860552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.860701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.860726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.860741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.860755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.860805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.345 [2024-07-15 20:40:18.870604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.345 [2024-07-15 20:40:18.870790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.345 [2024-07-15 20:40:18.870815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.345 [2024-07-15 20:40:18.870830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.345 [2024-07-15 20:40:18.870844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.345 [2024-07-15 20:40:18.870897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.345 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.880634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.880809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.880835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.880849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.880864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.880902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 
00:34:40.604 [2024-07-15 20:40:18.890668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.890851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.890887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.890911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.890926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.890957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.900713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.900860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.900896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.900912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.900926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.900970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.910704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.910871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.910910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.910926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.910940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.910970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 
00:34:40.604 [2024-07-15 20:40:18.920735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.920905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.920931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.920946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.920960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.920992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.930854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.931011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.931037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.931052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.931066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.931096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.940822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.940980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.941007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.941022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.941036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.941080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 
00:34:40.604 [2024-07-15 20:40:18.950856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.951020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.951046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.951060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.951080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.951111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.960834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.960996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.961021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.961036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.961050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.961080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.970888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.971038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.971063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.971078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.971092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.971121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 
00:34:40.604 [2024-07-15 20:40:18.980918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.981089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.981116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.981135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.981152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.981199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:18.990940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:18.991090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:18.991116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:18.991130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:18.991144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:18.991175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 00:34:40.604 [2024-07-15 20:40:19.000985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.604 [2024-07-15 20:40:19.001140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.604 [2024-07-15 20:40:19.001166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.604 [2024-07-15 20:40:19.001181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.604 [2024-07-15 20:40:19.001195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.604 [2024-07-15 20:40:19.001226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.604 qpair failed and we were unable to recover it. 
00:34:40.605 [2024-07-15 20:40:19.011011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.011154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.011180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.011195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.011209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.011239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.021000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.021142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.021168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.021183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.021197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.021226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.031039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.031210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.031236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.031265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.031280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.031308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 
00:34:40.605 [2024-07-15 20:40:19.041063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.041265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.041291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.041311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.041326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.041357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.051077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.051231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.051256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.051270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.051284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.051313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.061142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.061292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.061319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.061335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.061349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.061406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 
00:34:40.605 [2024-07-15 20:40:19.071138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.071284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.071310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.071324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.071337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.071366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.081193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.081349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.081375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.081390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.081404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.081433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.091256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.091403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.091430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.091445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.091458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.091488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 
00:34:40.605 [2024-07-15 20:40:19.101231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.101372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.101400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.101415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.101428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.101473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.111347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.111506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.111533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.111548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.111562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.111591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.605 [2024-07-15 20:40:19.121323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.121518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.121544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.121560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.121573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.121603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 
00:34:40.605 [2024-07-15 20:40:19.131348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.605 [2024-07-15 20:40:19.131508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.605 [2024-07-15 20:40:19.131534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.605 [2024-07-15 20:40:19.131555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.605 [2024-07-15 20:40:19.131569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.605 [2024-07-15 20:40:19.131614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.605 qpair failed and we were unable to recover it. 00:34:40.864 [2024-07-15 20:40:19.141419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.864 [2024-07-15 20:40:19.141622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.864 [2024-07-15 20:40:19.141648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.864 [2024-07-15 20:40:19.141663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.864 [2024-07-15 20:40:19.141676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.864 [2024-07-15 20:40:19.141706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.864 qpair failed and we were unable to recover it. 00:34:40.864 [2024-07-15 20:40:19.151409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.864 [2024-07-15 20:40:19.151576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.864 [2024-07-15 20:40:19.151602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.864 [2024-07-15 20:40:19.151616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.864 [2024-07-15 20:40:19.151629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.864 [2024-07-15 20:40:19.151658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.864 qpair failed and we were unable to recover it. 
00:34:40.864 [2024-07-15 20:40:19.161407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.864 [2024-07-15 20:40:19.161566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.864 [2024-07-15 20:40:19.161592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.864 [2024-07-15 20:40:19.161607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.864 [2024-07-15 20:40:19.161621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.864 [2024-07-15 20:40:19.161650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.864 qpair failed and we were unable to recover it. 00:34:40.864 [2024-07-15 20:40:19.171458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.864 [2024-07-15 20:40:19.171602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.864 [2024-07-15 20:40:19.171628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.864 [2024-07-15 20:40:19.171653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.864 [2024-07-15 20:40:19.171666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.864 [2024-07-15 20:40:19.171710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.864 qpair failed and we were unable to recover it. 00:34:40.864 [2024-07-15 20:40:19.181501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.864 [2024-07-15 20:40:19.181665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.864 [2024-07-15 20:40:19.181692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.864 [2024-07-15 20:40:19.181707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.864 [2024-07-15 20:40:19.181720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.181749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 
00:34:40.865 [2024-07-15 20:40:19.191474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.191616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.191642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.191657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.191670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.191701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.201523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.201681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.201708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.201722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.201735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.201765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.211571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.211714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.211740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.211755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.211767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.211798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 
00:34:40.865 [2024-07-15 20:40:19.221577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.221732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.221765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.221781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.221793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.221838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.231624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.231770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.231797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.231812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.231825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.231855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.241689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.241870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.241907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.241923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.241936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.241966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 
00:34:40.865 [2024-07-15 20:40:19.251671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.251822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.251848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.251864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.251886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.251919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.261770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.261925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.261951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.261966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.261979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.262016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.271722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.271874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.271910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.271928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.271942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.271974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 
00:34:40.865 [2024-07-15 20:40:19.281746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.281939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.281978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.281993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.282006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.282036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.291798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.291981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.292008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.292022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.292034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.292064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.865 [2024-07-15 20:40:19.301848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.302072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.302097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.302112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.302125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.302155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 
00:34:40.865 [2024-07-15 20:40:19.311893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.865 [2024-07-15 20:40:19.312071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.865 [2024-07-15 20:40:19.312102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.865 [2024-07-15 20:40:19.312118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.865 [2024-07-15 20:40:19.312131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.865 [2024-07-15 20:40:19.312161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.865 qpair failed and we were unable to recover it. 00:34:40.866 [2024-07-15 20:40:19.321860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.322015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.322042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.322058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.322071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.322101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 00:34:40.866 [2024-07-15 20:40:19.331907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.332090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.332119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.332142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.332156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.332188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 
00:34:40.866 [2024-07-15 20:40:19.341921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.342062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.342090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.342105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.342118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.342149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 00:34:40.866 [2024-07-15 20:40:19.351979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.352118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.352145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.352160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.352181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.352212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 00:34:40.866 [2024-07-15 20:40:19.362016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.362205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.362232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.362247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.362261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.362291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 
00:34:40.866 [2024-07-15 20:40:19.372039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.372226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.372252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.372267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.372281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.372311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 00:34:40.866 [2024-07-15 20:40:19.382031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.382172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.382199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.382214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.382228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.382257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 00:34:40.866 [2024-07-15 20:40:19.392081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.866 [2024-07-15 20:40:19.392230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.866 [2024-07-15 20:40:19.392257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.866 [2024-07-15 20:40:19.392273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.866 [2024-07-15 20:40:19.392286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:40.866 [2024-07-15 20:40:19.392329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.866 qpair failed and we were unable to recover it. 
00:34:41.125 [2024-07-15 20:40:19.402104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.402278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.402305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.402320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.402333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.402379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 00:34:41.125 [2024-07-15 20:40:19.412159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.412331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.412358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.412372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.412385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.412430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 00:34:41.125 [2024-07-15 20:40:19.422135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.422317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.422343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.422358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.422372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.422402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 
00:34:41.125 [2024-07-15 20:40:19.432190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.432333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.432359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.432374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.432389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.432435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 00:34:41.125 [2024-07-15 20:40:19.442235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.442379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.442403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.442418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.442436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.442465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 00:34:41.125 [2024-07-15 20:40:19.452225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.452370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.452396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.452411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.452424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.452453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 
00:34:41.125 [2024-07-15 20:40:19.462284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.462425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.462461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.462476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.462488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.462519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 00:34:41.125 [2024-07-15 20:40:19.472294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.472436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.472463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.472478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.472491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.472520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 00:34:41.125 [2024-07-15 20:40:19.482341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.482494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.482520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.482535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.482549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.482580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 
00:34:41.125 [2024-07-15 20:40:19.492387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.492539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.492564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.492579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.125 [2024-07-15 20:40:19.492593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.125 [2024-07-15 20:40:19.492623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.125 qpair failed and we were unable to recover it. 00:34:41.125 [2024-07-15 20:40:19.502374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.125 [2024-07-15 20:40:19.502519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.125 [2024-07-15 20:40:19.502545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.125 [2024-07-15 20:40:19.502560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.502575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.502604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.512472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.512641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.512668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.512703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.512719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.512748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 
00:34:41.126 [2024-07-15 20:40:19.522455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.522638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.522665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.522680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.522694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.522724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.532465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.532608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.532635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.532655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.532669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.532700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.542490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.542637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.542664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.542679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.542693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.542723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 
00:34:41.126 [2024-07-15 20:40:19.552529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.552678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.552704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.552719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.552733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.552777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.562577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.562722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.562749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.562765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.562779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.562823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.572575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.572727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.572753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.572768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.572782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.572811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 
00:34:41.126 [2024-07-15 20:40:19.582610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.582764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.582792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.582808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.582824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.582869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.592648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.592791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.592817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.592832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.592845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.592887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.602682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.602835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.602863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.602889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.602904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.602935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 
00:34:41.126 [2024-07-15 20:40:19.612723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.612885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.612913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.612928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.612941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.612971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.622720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.622861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.622901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.622919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.622933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.622963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 00:34:41.126 [2024-07-15 20:40:19.632755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.632902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.632928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.632944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.632957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.126 [2024-07-15 20:40:19.632987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.126 qpair failed and we were unable to recover it. 
00:34:41.126 [2024-07-15 20:40:19.642792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.126 [2024-07-15 20:40:19.642950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.126 [2024-07-15 20:40:19.642977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.126 [2024-07-15 20:40:19.642992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.126 [2024-07-15 20:40:19.643006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.127 [2024-07-15 20:40:19.643035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-15 20:40:19.652844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.127 [2024-07-15 20:40:19.653003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.127 [2024-07-15 20:40:19.653030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.127 [2024-07-15 20:40:19.653046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.127 [2024-07-15 20:40:19.653059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.127 [2024-07-15 20:40:19.653091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.385 [2024-07-15 20:40:19.662924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.385 [2024-07-15 20:40:19.663083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.385 [2024-07-15 20:40:19.663109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.385 [2024-07-15 20:40:19.663125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.385 [2024-07-15 20:40:19.663138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.385 [2024-07-15 20:40:19.663174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.385 qpair failed and we were unable to recover it. 
00:34:41.385 [2024-07-15 20:40:19.672953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.385 [2024-07-15 20:40:19.673130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.385 [2024-07-15 20:40:19.673156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.385 [2024-07-15 20:40:19.673172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.385 [2024-07-15 20:40:19.673185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.385 [2024-07-15 20:40:19.673214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.385 qpair failed and we were unable to recover it. 00:34:41.385 [2024-07-15 20:40:19.682926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.385 [2024-07-15 20:40:19.683090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.385 [2024-07-15 20:40:19.683116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.385 [2024-07-15 20:40:19.683131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.385 [2024-07-15 20:40:19.683144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.385 [2024-07-15 20:40:19.683188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.385 qpair failed and we were unable to recover it. 00:34:41.385 [2024-07-15 20:40:19.692921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.385 [2024-07-15 20:40:19.693098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.385 [2024-07-15 20:40:19.693125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.385 [2024-07-15 20:40:19.693140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.385 [2024-07-15 20:40:19.693154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.385 [2024-07-15 20:40:19.693185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.385 qpair failed and we were unable to recover it. 
00:34:41.385 [2024-07-15 20:40:19.703008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.385 [2024-07-15 20:40:19.703157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.385 [2024-07-15 20:40:19.703183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.385 [2024-07-15 20:40:19.703198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.385 [2024-07-15 20:40:19.703212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.385 [2024-07-15 20:40:19.703255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.385 qpair failed and we were unable to recover it. 00:34:41.385 [2024-07-15 20:40:19.713000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.385 [2024-07-15 20:40:19.713145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.385 [2024-07-15 20:40:19.713176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.385 [2024-07-15 20:40:19.713193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.385 [2024-07-15 20:40:19.713207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.385 [2024-07-15 20:40:19.713237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.385 qpair failed and we were unable to recover it. 00:34:41.385 [2024-07-15 20:40:19.723024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.385 [2024-07-15 20:40:19.723204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.385 [2024-07-15 20:40:19.723230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.385 [2024-07-15 20:40:19.723245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.723259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.723289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 
00:34:41.386 [2024-07-15 20:40:19.733060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.733249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.733276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.733291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.733304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.733333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.743093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.743243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.743270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.743286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.743302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.743347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.753111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.753258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.753285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.753300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.753313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.753363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 
00:34:41.386 [2024-07-15 20:40:19.763170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.763336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.763363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.763378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.763391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.763420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.773222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.773418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.773460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.773474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.773487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.773529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.783210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.783390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.783416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.783432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.783445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.783474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 
00:34:41.386 [2024-07-15 20:40:19.793201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.793343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.793369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.793384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.793397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.793427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.803288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.803444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.803471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.803486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.803499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.803530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.813325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.813477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.813503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.813518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.813531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.813577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 
00:34:41.386 [2024-07-15 20:40:19.823320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.823461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.823488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.823502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.823515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.823558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.833324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.833500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.833526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.833542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.833555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.833584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.843398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.843583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.843610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.843648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.843669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.843700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 
00:34:41.386 [2024-07-15 20:40:19.853416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.853570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.853598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.853615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.853632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.853677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.863376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.386 [2024-07-15 20:40:19.863524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.386 [2024-07-15 20:40:19.863551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.386 [2024-07-15 20:40:19.863566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.386 [2024-07-15 20:40:19.863579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.386 [2024-07-15 20:40:19.863610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.386 qpair failed and we were unable to recover it. 00:34:41.386 [2024-07-15 20:40:19.873428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.387 [2024-07-15 20:40:19.873573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.387 [2024-07-15 20:40:19.873600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.387 [2024-07-15 20:40:19.873615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.387 [2024-07-15 20:40:19.873629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.387 [2024-07-15 20:40:19.873658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.387 qpair failed and we were unable to recover it. 
00:34:41.387 [2024-07-15 20:40:19.883528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.387 [2024-07-15 20:40:19.883699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.387 [2024-07-15 20:40:19.883725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.387 [2024-07-15 20:40:19.883756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.387 [2024-07-15 20:40:19.883769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.387 [2024-07-15 20:40:19.883812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.387 qpair failed and we were unable to recover it. 00:34:41.387 [2024-07-15 20:40:19.893472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.387 [2024-07-15 20:40:19.893615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.387 [2024-07-15 20:40:19.893641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.387 [2024-07-15 20:40:19.893656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.387 [2024-07-15 20:40:19.893670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.387 [2024-07-15 20:40:19.893700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.387 qpair failed and we were unable to recover it. 00:34:41.387 [2024-07-15 20:40:19.903544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.387 [2024-07-15 20:40:19.903694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.387 [2024-07-15 20:40:19.903720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.387 [2024-07-15 20:40:19.903735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.387 [2024-07-15 20:40:19.903749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.387 [2024-07-15 20:40:19.903778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.387 qpair failed and we were unable to recover it. 
00:34:41.387 [2024-07-15 20:40:19.913559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.387 [2024-07-15 20:40:19.913746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.387 [2024-07-15 20:40:19.913773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.387 [2024-07-15 20:40:19.913803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.387 [2024-07-15 20:40:19.913816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.387 [2024-07-15 20:40:19.913860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.387 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:19.923582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.923735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.923762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.923777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.923791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.923821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:19.933686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.933831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.933858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.933887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.933903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.933933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 
00:34:41.647 [2024-07-15 20:40:19.943637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.943776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.943803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.943819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.943832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.943862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:19.953714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.953871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.953905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.953921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.953934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.953965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:19.963698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.963919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.963947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.963963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.963980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.964011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 
00:34:41.647 [2024-07-15 20:40:19.973710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.973858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.973892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.973909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.973922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.973954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:19.983738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.983885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.983912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.983927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.983940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.983970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:19.993850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:19.994006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:19.994033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:19.994049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:19.994062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:19.994092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 
00:34:41.647 [2024-07-15 20:40:20.003791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:20.003961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:20.003989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:20.004005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:20.004018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:20.004049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:20.013822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:20.014022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:20.014049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:20.014065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:20.014078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:20.014107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 00:34:41.647 [2024-07-15 20:40:20.023949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.647 [2024-07-15 20:40:20.024096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.647 [2024-07-15 20:40:20.024129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.647 [2024-07-15 20:40:20.024146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.647 [2024-07-15 20:40:20.024159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.647 [2024-07-15 20:40:20.024189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.647 qpair failed and we were unable to recover it. 
00:34:41.647 [2024-07-15 20:40:20.033908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.034073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.034101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.034117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.034131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.034161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.043957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.044113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.044139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.044155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.044168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.044198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.053987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.054149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.054178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.054194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.054208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.054239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 
00:34:41.648 [2024-07-15 20:40:20.063987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.064132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.064158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.064174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.064188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.064241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.073998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.074143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.074170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.074184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.074198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.074231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.084074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.084262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.084303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.084319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.084331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.084362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 
00:34:41.648 [2024-07-15 20:40:20.094103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.094293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.094319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.094349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.094363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.094407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.104071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.104221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.104247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.104261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.104275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.104318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.114125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.114268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.114302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.114318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.114330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.114375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 
00:34:41.648 [2024-07-15 20:40:20.124159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.124310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.124336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.124350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.124378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.124408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.134166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.134324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.134349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.134364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.134377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.134421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.144181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.144341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.144367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.144382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.144395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.144426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 
00:34:41.648 [2024-07-15 20:40:20.154244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.154382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.154408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.154423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.154438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.154487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.164236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.164459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.648 [2024-07-15 20:40:20.164484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.648 [2024-07-15 20:40:20.164498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.648 [2024-07-15 20:40:20.164513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.648 [2024-07-15 20:40:20.164541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.648 qpair failed and we were unable to recover it. 00:34:41.648 [2024-07-15 20:40:20.174345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.648 [2024-07-15 20:40:20.174527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.649 [2024-07-15 20:40:20.174570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.649 [2024-07-15 20:40:20.174585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.649 [2024-07-15 20:40:20.174599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.649 [2024-07-15 20:40:20.174642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.649 qpair failed and we were unable to recover it. 
00:34:41.907 [2024-07-15 20:40:20.184339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.184523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.184551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.184584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.184599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.184630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 00:34:41.907 [2024-07-15 20:40:20.194309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.194453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.194480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.194495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.194508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.194539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 00:34:41.907 [2024-07-15 20:40:20.204384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.204584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.204615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.204631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.204645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.204675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 
00:34:41.907 [2024-07-15 20:40:20.214382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.214527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.214553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.214568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.214581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.214612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 00:34:41.907 [2024-07-15 20:40:20.224419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.224556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.224582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.224597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.224610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.224640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 00:34:41.907 [2024-07-15 20:40:20.234456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.234596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.234622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.234637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.234650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.234681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 
00:34:41.907 [2024-07-15 20:40:20.244466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.244625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.244651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.244665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.244685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.244715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 00:34:41.907 [2024-07-15 20:40:20.254490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.254634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.254660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.254675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.254688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.254718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 00:34:41.907 [2024-07-15 20:40:20.264507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.264648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.264674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.264689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.907 [2024-07-15 20:40:20.264702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.907 [2024-07-15 20:40:20.264732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.907 qpair failed and we were unable to recover it. 
00:34:41.907 [2024-07-15 20:40:20.274578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.907 [2024-07-15 20:40:20.274723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.907 [2024-07-15 20:40:20.274749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.907 [2024-07-15 20:40:20.274763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.274776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.274807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.284620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.284768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.284794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.284812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.284841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.284871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.294600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.294752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.294778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.294793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.294806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.294836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 
00:34:41.908 [2024-07-15 20:40:20.304644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.304792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.304818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.304833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.304846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.304885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.314648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.314790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.314816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.314831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.314843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.314874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.324681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.324824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.324849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.324863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.324885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.324917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 
00:34:41.908 [2024-07-15 20:40:20.334768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.334956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.334983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.335004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.335019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.335065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.344720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.344855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.344888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.344905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.344918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.344948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.354777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.354935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.354961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.354976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.354989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.355033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 
00:34:41.908 [2024-07-15 20:40:20.364820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.364962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.364988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.365002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.365015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.365044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.374833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.374983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.375009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.375023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.375036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.375065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.384870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.385042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.385067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.385082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.385094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.385124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 
00:34:41.908 [2024-07-15 20:40:20.394974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.395115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.908 [2024-07-15 20:40:20.395140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.908 [2024-07-15 20:40:20.395154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.908 [2024-07-15 20:40:20.395166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.908 [2024-07-15 20:40:20.395195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.908 qpair failed and we were unable to recover it. 00:34:41.908 [2024-07-15 20:40:20.405015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.908 [2024-07-15 20:40:20.405165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.909 [2024-07-15 20:40:20.405190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.909 [2024-07-15 20:40:20.405204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.909 [2024-07-15 20:40:20.405217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.909 [2024-07-15 20:40:20.405247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.909 qpair failed and we were unable to recover it. 00:34:41.909 [2024-07-15 20:40:20.414971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.909 [2024-07-15 20:40:20.415117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.909 [2024-07-15 20:40:20.415142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.909 [2024-07-15 20:40:20.415156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.909 [2024-07-15 20:40:20.415169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.909 [2024-07-15 20:40:20.415199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.909 qpair failed and we were unable to recover it. 
00:34:41.909 [2024-07-15 20:40:20.424982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.909 [2024-07-15 20:40:20.425123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.909 [2024-07-15 20:40:20.425148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.909 [2024-07-15 20:40:20.425168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.909 [2024-07-15 20:40:20.425182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.909 [2024-07-15 20:40:20.425211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.909 qpair failed and we were unable to recover it. 00:34:41.909 [2024-07-15 20:40:20.435003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.909 [2024-07-15 20:40:20.435153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.909 [2024-07-15 20:40:20.435178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.909 [2024-07-15 20:40:20.435191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.909 [2024-07-15 20:40:20.435205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:41.909 [2024-07-15 20:40:20.435235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.909 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.445058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.445254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.445278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.445292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.445304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.445335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 
00:34:42.167 [2024-07-15 20:40:20.455058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.455224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.455249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.455264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.455276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.455305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.465078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.465228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.465253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.465266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.465279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.465309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.475156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.475300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.475325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.475339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.475352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.475381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 
00:34:42.167 [2024-07-15 20:40:20.485188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.485373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.485398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.485412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.485424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.485452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.495168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.495309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.495334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.495348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.495361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.495389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.505204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.505343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.505368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.505382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.505395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.505424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 
00:34:42.167 [2024-07-15 20:40:20.515244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.515381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.515412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.515427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.515440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.515469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.525264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.525412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.525436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.525450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.525463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.525492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.535348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.535491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.535516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.535529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.535542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.535572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 
00:34:42.167 [2024-07-15 20:40:20.545443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.545582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.545608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.545624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.545637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d64000b90 00:34:42.167 [2024-07-15 20:40:20.545666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.555367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.555515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.555547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.555563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.555577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1af5600 00:34:42.167 [2024-07-15 20:40:20.555613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.167 qpair failed and we were unable to recover it. 00:34:42.167 [2024-07-15 20:40:20.565389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.167 [2024-07-15 20:40:20.565547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.167 [2024-07-15 20:40:20.565574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.167 [2024-07-15 20:40:20.565589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.167 [2024-07-15 20:40:20.565602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1af5600 00:34:42.167 [2024-07-15 20:40:20.565632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.168 qpair failed and we were unable to recover it. 
00:34:42.168 [2024-07-15 20:40:20.575408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.168 [2024-07-15 20:40:20.575585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.168 [2024-07-15 20:40:20.575618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.168 [2024-07-15 20:40:20.575634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.168 [2024-07-15 20:40:20.575647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d54000b90 00:34:42.168 [2024-07-15 20:40:20.575681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:42.168 qpair failed and we were unable to recover it. 00:34:42.168 [2024-07-15 20:40:20.585497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.168 [2024-07-15 20:40:20.585690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.168 [2024-07-15 20:40:20.585718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.168 [2024-07-15 20:40:20.585733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.168 [2024-07-15 20:40:20.585747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d54000b90 00:34:42.168 [2024-07-15 20:40:20.585777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:42.168 qpair failed and we were unable to recover it. 00:34:42.168 [2024-07-15 20:40:20.595455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.168 [2024-07-15 20:40:20.595610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.168 [2024-07-15 20:40:20.595642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.168 [2024-07-15 20:40:20.595658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.168 [2024-07-15 20:40:20.595671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d5c000b90 00:34:42.168 [2024-07-15 20:40:20.595704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.168 qpair failed and we were unable to recover it. 
00:34:42.168 [2024-07-15 20:40:20.605588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.168 [2024-07-15 20:40:20.605744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.168 [2024-07-15 20:40:20.605776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.168 [2024-07-15 20:40:20.605795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.168 [2024-07-15 20:40:20.605809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3d5c000b90 00:34:42.168 [2024-07-15 20:40:20.605841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:42.168 qpair failed and we were unable to recover it. 00:34:42.168 [2024-07-15 20:40:20.605950] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:42.168 A controller has encountered a failure and is being reset. 00:34:42.168 [2024-07-15 20:40:20.606008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b035b0 (9): Bad file descriptor 00:34:42.426 Controller properly reset. 00:34:42.426 Initializing NVMe Controllers 00:34:42.426 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:42.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:42.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:42.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:42.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:42.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:42.426 Initialization complete. Launching workers. 
00:34:42.426 Starting thread on core 1 00:34:42.427 Starting thread on core 2 00:34:42.427 Starting thread on core 3 00:34:42.427 Starting thread on core 0 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:42.427 00:34:42.427 real 0m10.831s 00:34:42.427 user 0m17.817s 00:34:42.427 sys 0m5.602s 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:42.427 ************************************ 00:34:42.427 END TEST nvmf_target_disconnect_tc2 00:34:42.427 ************************************ 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:42.427 rmmod nvme_tcp 00:34:42.427 rmmod nvme_fabrics 00:34:42.427 rmmod nvme_keyring 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 16592 ']' 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 16592 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 16592 ']' 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 16592 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 16592 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 16592' 00:34:42.427 killing process with pid 16592 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 16592 00:34:42.427 20:40:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 16592 00:34:42.685 20:40:21 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:42.685 20:40:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:42.685 20:40:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:42.686 20:40:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:42.686 20:40:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:42.686 20:40:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.686 20:40:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.686 20:40:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.588 20:40:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:44.588 00:34:44.588 real 0m15.665s 00:34:44.588 user 0m44.098s 00:34:44.588 sys 0m7.608s 00:34:44.588 20:40:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:44.588 20:40:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:44.588 ************************************ 00:34:44.588 END TEST nvmf_target_disconnect 00:34:44.588 ************************************ 00:34:44.847 20:40:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:44.847 20:40:23 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:44.847 20:40:23 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:44.847 20:40:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.847 20:40:23 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:44.847 00:34:44.847 real 27m8.690s 00:34:44.847 user 73m53.390s 00:34:44.847 sys 6m25.154s 00:34:44.847 20:40:23 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:44.847 20:40:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.847 ************************************ 00:34:44.847 END TEST nvmf_tcp 00:34:44.847 ************************************ 00:34:44.847 20:40:23 -- common/autotest_common.sh@1142 -- # return 0 00:34:44.847 20:40:23 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:44.847 20:40:23 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:44.847 20:40:23 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:44.847 20:40:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:44.847 20:40:23 -- common/autotest_common.sh@10 -- # set +x 00:34:44.847 ************************************ 00:34:44.847 START TEST spdkcli_nvmf_tcp 00:34:44.847 ************************************ 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:44.847 * Looking for test storage... 
00:34:44.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.847 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=17788 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 17788 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 17788 ']' 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:44.848 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 [2024-07-15 20:40:23.311626] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:34:44.848 [2024-07-15 20:40:23.311707] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17788 ] 00:34:44.848 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.848 [2024-07-15 20:40:23.369222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:45.106 [2024-07-15 20:40:23.460793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.106 [2024-07-15 20:40:23.460796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 20:40:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:45.106 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:45.106 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:45.106 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:45.106 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:45.106 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:45.106 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:45.106 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.106 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:45.106 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.106 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:45.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:45.106 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:45.106 ' 00:34:47.633 [2024-07-15 20:40:26.162914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.006 [2024-07-15 20:40:27.383126] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:51.531 [2024-07-15 20:40:29.678481] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:53.425 [2024-07-15 20:40:31.624604] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:54.797 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:54.797 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:54.797 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:54.797 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:54.797 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:54.797 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:54.797 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:54.797 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:54.797 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.797 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.797 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:54.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:54.797 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:54.797 20:40:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.362 20:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:55.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:55.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:55.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:55.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:55.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:55.362 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:55.362 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:55.362 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:55.362 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:55.362 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:55.362 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:55.362 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:55.362 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:55.362 ' 00:35:00.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:00.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:00.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:00.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:00.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:00.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:00.635 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:00.635 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:00.635 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:00.635 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:00.635 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:00.635 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:00.635 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:00.635 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 17788 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 17788 ']' 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 17788 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 17788 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 17788' 00:35:00.635 killing process with pid 17788 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 17788 00:35:00.635 20:40:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 17788 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 17788 ']' 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 17788 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 17788 ']' 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 17788 00:35:00.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (17788) - No such process 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 17788 is not found' 00:35:00.895 Process with pid 17788 is not found 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:00.895 00:35:00.895 real 0m15.983s 00:35:00.895 user 0m33.765s 00:35:00.895 sys 0m0.841s 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:00.895 20:40:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.895 ************************************ 00:35:00.895 END TEST spdkcli_nvmf_tcp 00:35:00.895 ************************************ 00:35:00.895 20:40:39 -- common/autotest_common.sh@1142 -- # return 0 00:35:00.895 20:40:39 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh 
--transport=tcp 00:35:00.895 20:40:39 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:00.895 20:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:00.895 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:35:00.895 ************************************ 00:35:00.895 START TEST nvmf_identify_passthru 00:35:00.895 ************************************ 00:35:00.895 20:40:39 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:00.895 * Looking for test storage... 00:35:00.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:00.895 20:40:39 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.895 20:40:39 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.895 20:40:39 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.895 20:40:39 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.895 20:40:39 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.895 20:40:39 
nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.895 20:40:39 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.895 20:40:39 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:00.895 20:40:39 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:00.895 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:00.896 20:40:39 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.896 20:40:39 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.896 20:40:39 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.896 20:40:39 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.896 20:40:39 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.896 20:40:39 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.896 20:40:39 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.896 20:40:39 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:00.896 20:40:39 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.896 20:40:39 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.896 20:40:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:00.896 20:40:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:00.896 20:40:39 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:00.896 20:40:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:02.797 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:02.797 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:02.797 20:40:41 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:02.797 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.797 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:02.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:02.798 20:40:41 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.798 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:03.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:35:03.056 00:35:03.056 --- 10.0.0.2 ping statistics --- 00:35:03.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.056 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:03.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:35:03.056 00:35:03.056 --- 10.0.0.1 ping statistics --- 00:35:03.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.056 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:03.056 20:40:41 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:03.056 20:40:41 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.056 20:40:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:03.056 20:40:41 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:03.056 20:40:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:03.056 20:40:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:03.056 20:40:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:03.056 20:40:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:03.056 20:40:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:03.056 EAL: No free 2048 kB hugepages reported on node 1 00:35:07.236 
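For reference, the NVMe address lookup traced above condenses to the sketch below (the workspace path is the one used by this run; the resulting address 0000:88:00.0 and serial PHLJ916004901P0FGN are specific to this host):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # gen_nvme.sh emits a bdev_nvme config; its traddr fields are the local NVMe PCIe addresses
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  # read the serial number straight from the PCIe controller, to be compared later over TCP
  "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}'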
20:40:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:07.236 20:40:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:07.236 20:40:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:07.236 20:40:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:07.237 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.419 20:40:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:11.419 20:40:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.419 20:40:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.419 20:40:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=22279 00:35:11.419 20:40:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:11.419 20:40:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:11.419 20:40:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 22279 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 22279 ']' 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:11.419 20:40:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.419 [2024-07-15 20:40:49.932309] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:35:11.419 [2024-07-15 20:40:49.932386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.678 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.678 [2024-07-15 20:40:49.998757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:11.678 [2024-07-15 20:40:50.096645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.678 [2024-07-15 20:40:50.096698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
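The target start-up traced here can be read as the following sketch (core mask, namespace and paths are the values used by this run; rpc_cmd and waitforlisten are the autotest helpers, assumed to talk to /var/tmp/spdk.sock via scripts/rpc.py):

  # start nvmf_tgt inside the target namespace, paused until RPC initialization is requested
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                           # wait for the RPC socket to appear
  rpc_cmd nvmf_set_config --passthru-identify-ctrlr  # admin_cmd_passthru.identify_ctrlr = true
  rpc_cmd framework_start_init                       # leave the --wait-for-rpc state
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # transport options as used by this test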
00:35:11.678 [2024-07-15 20:40:50.096727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.678 [2024-07-15 20:40:50.096745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.678 [2024-07-15 20:40:50.096755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.678 [2024-07-15 20:40:50.096852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.678 [2024-07-15 20:40:50.096916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:11.678 [2024-07-15 20:40:50.096983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:11.678 [2024-07-15 20:40:50.096986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.678 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:11.678 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:11.678 20:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:11.678 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.678 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.678 INFO: Log level set to 20 00:35:11.678 INFO: Requests: 00:35:11.678 { 00:35:11.678 "jsonrpc": "2.0", 00:35:11.678 "method": "nvmf_set_config", 00:35:11.678 "id": 1, 00:35:11.678 "params": { 00:35:11.678 "admin_cmd_passthru": { 00:35:11.678 "identify_ctrlr": true 00:35:11.678 } 00:35:11.678 } 00:35:11.678 } 00:35:11.678 00:35:11.678 INFO: response: 00:35:11.678 { 00:35:11.678 "jsonrpc": "2.0", 00:35:11.678 "id": 1, 00:35:11.678 "result": true 00:35:11.678 } 00:35:11.678 00:35:11.678 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.678 20:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:11.678 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.678 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.678 INFO: Setting log level to 20 00:35:11.678 INFO: Setting log level to 20 00:35:11.678 INFO: Log level set to 20 00:35:11.678 INFO: Log level set to 20 00:35:11.678 INFO: Requests: 00:35:11.678 { 00:35:11.678 "jsonrpc": "2.0", 00:35:11.678 "method": "framework_start_init", 00:35:11.678 "id": 1 00:35:11.678 } 00:35:11.678 00:35:11.678 INFO: Requests: 00:35:11.678 { 00:35:11.678 "jsonrpc": "2.0", 00:35:11.678 "method": "framework_start_init", 00:35:11.678 "id": 1 00:35:11.678 } 00:35:11.678 00:35:11.936 [2024-07-15 20:40:50.272074] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:11.936 INFO: response: 00:35:11.936 { 00:35:11.936 "jsonrpc": "2.0", 00:35:11.936 "id": 1, 00:35:11.936 "result": true 00:35:11.936 } 00:35:11.936 00:35:11.936 INFO: response: 00:35:11.936 { 00:35:11.936 "jsonrpc": "2.0", 00:35:11.936 "id": 1, 00:35:11.936 "result": true 00:35:11.936 } 00:35:11.936 00:35:11.936 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.936 20:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:11.936 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.936 20:40:50 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:11.936 INFO: Setting log level to 40 00:35:11.936 INFO: Setting log level to 40 00:35:11.936 INFO: Setting log level to 40 00:35:11.936 [2024-07-15 20:40:50.282083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.936 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.936 20:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:11.936 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:11.936 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.936 20:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:11.936 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.936 20:40:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.216 Nvme0n1 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.216 [2024-07-15 20:40:53.166306] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.216 [ 00:35:15.216 { 00:35:15.216 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:15.216 "subtype": "Discovery", 00:35:15.216 "listen_addresses": [], 00:35:15.216 "allow_any_host": true, 00:35:15.216 "hosts": [] 00:35:15.216 }, 00:35:15.216 { 00:35:15.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:15.216 "subtype": "NVMe", 00:35:15.216 "listen_addresses": [ 00:35:15.216 { 00:35:15.216 "trtype": "TCP", 00:35:15.216 "adrfam": "IPv4", 00:35:15.216 "traddr": "10.0.0.2", 00:35:15.216 "trsvcid": "4420" 00:35:15.216 } 00:35:15.216 ], 00:35:15.216 "allow_any_host": true, 00:35:15.216 "hosts": [], 00:35:15.216 "serial_number": 
"SPDK00000000000001", 00:35:15.216 "model_number": "SPDK bdev Controller", 00:35:15.216 "max_namespaces": 1, 00:35:15.216 "min_cntlid": 1, 00:35:15.216 "max_cntlid": 65519, 00:35:15.216 "namespaces": [ 00:35:15.216 { 00:35:15.216 "nsid": 1, 00:35:15.216 "bdev_name": "Nvme0n1", 00:35:15.216 "name": "Nvme0n1", 00:35:15.216 "nguid": "166AC10DB8324651ABA2254ADCAADC13", 00:35:15.216 "uuid": "166ac10d-b832-4651-aba2-254adcaadc13" 00:35:15.216 } 00:35:15.216 ] 00:35:15.216 } 00:35:15.216 ] 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:15.216 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:15.216 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:15.216 20:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:15.216 rmmod nvme_tcp 00:35:15.216 rmmod nvme_fabrics 00:35:15.216 rmmod nvme_keyring 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:15.216 20:40:53 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 22279 ']' 00:35:15.216 20:40:53 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 22279 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 22279 ']' 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 22279 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 22279 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 22279' 00:35:15.216 killing process with pid 22279 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 22279 00:35:15.216 20:40:53 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 22279 00:35:17.112 20:40:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:17.112 20:40:55 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:17.112 20:40:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:17.112 20:40:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:17.112 20:40:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:17.112 20:40:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.112 20:40:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.112 20:40:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.014 20:40:57 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:19.014 00:35:19.014 real 0m17.937s 00:35:19.014 user 0m26.471s 00:35:19.014 sys 0m2.348s 00:35:19.014 20:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:19.014 20:40:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.014 ************************************ 00:35:19.014 END TEST nvmf_identify_passthru 00:35:19.014 ************************************ 00:35:19.014 20:40:57 -- common/autotest_common.sh@1142 -- # return 0 00:35:19.014 20:40:57 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:19.014 20:40:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:19.014 20:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.014 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:35:19.014 ************************************ 00:35:19.014 START TEST nvmf_dif 00:35:19.014 ************************************ 00:35:19.014 20:40:57 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:19.014 * Looking for test storage... 
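Before nvmf_dif starts, the nvmftestfini teardown seen above amounts to the sketch below (pid and interface name are from this run; _remove_spdk_ns is assumed to delete the cvl_0_0_ns_spdk namespace created earlier):

  kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt started for identify_passthru
  modprobe -v -r nvme-tcp              # rmmod output above shows nvme_fabrics/nvme_keyring going with it
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns
  ip -4 addr flush cvl_0_1             # drop the 10.0.0.1/24 initiator address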
00:35:19.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.014 20:40:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.014 20:40:57 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.014 20:40:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.014 20:40:57 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.014 20:40:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.014 20:40:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.014 20:40:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.014 20:40:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.014 20:40:57 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:19.015 20:40:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:19.015 20:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:19.015 20:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:19.015 20:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:19.015 20:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:19.015 20:40:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.015 20:40:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.015 20:40:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:19.015 20:40:57 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:19.015 20:40:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:20.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:20.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
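The device scan traced here resolves each matching PCI function to its kernel interface through sysfs; a minimal sketch of that step (0000:0a:00.0 is the first E810 port found on this host):

  pci=0000:0a:00.0
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
  pci_net_devs=( "${pci_net_devs[@]##*/}" )            # strip the sysfs path, keep cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"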
00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:20.915 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:20.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:20.915 20:40:59 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:20.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:20.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:35:20.915 00:35:20.915 --- 10.0.0.2 ping statistics --- 00:35:20.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.915 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:20.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:20.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:35:20.915 00:35:20.915 --- 10.0.0.1 ping statistics --- 00:35:20.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.915 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:20.915 20:40:59 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:20.916 20:40:59 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:22.287 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:22.287 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:22.287 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:22.287 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:22.287 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:22.287 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:22.287 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:22.287 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:22.287 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:22.287 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:22.287 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:22.287 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:22.287 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:22.287 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:22.287 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:22.287 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:22.287 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:22.287 20:41:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:22.287 20:41:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=25539 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:22.287 20:41:00 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 25539 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 25539 ']' 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:22.287 20:41:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.287 [2024-07-15 20:41:00.748969] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:35:22.287 [2024-07-15 20:41:00.749061] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:22.287 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.287 [2024-07-15 20:41:00.814222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.545 [2024-07-15 20:41:00.904124] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:22.545 [2024-07-15 20:41:00.904197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:22.545 [2024-07-15 20:41:00.904210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:22.545 [2024-07-15 20:41:00.904222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:22.545 [2024-07-15 20:41:00.904231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
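The nvmftestinit trace above reduces to a small amount of network plumbing before the target application is started: the E810 port cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened in the firewall, connectivity is verified with one ping in each direction, and nvmf_tgt is then launched inside that namespace. A condensed, hand-runnable sketch of the same sequence; the interface and namespace names are specific to this host, and the error handling in nvmf/common.sh is omitted:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # nvmfappstart; waitforlisten then polls /var/tmp/spdk.sock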
00:35:22.545 [2024-07-15 20:41:00.904269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:22.545 20:41:01 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.545 20:41:01 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:22.545 20:41:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:22.545 20:41:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.545 [2024-07-15 20:41:01.053886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.545 20:41:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:22.545 20:41:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.803 ************************************ 00:35:22.803 START TEST fio_dif_1_default 00:35:22.803 ************************************ 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.803 bdev_null0 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.803 [2024-07-15 20:41:01.118235] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.803 { 00:35:22.803 "params": { 00:35:22.803 "name": "Nvme$subsystem", 00:35:22.803 "trtype": "$TEST_TRANSPORT", 00:35:22.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.803 "adrfam": "ipv4", 00:35:22.803 "trsvcid": "$NVMF_PORT", 00:35:22.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.803 "hdgst": ${hdgst:-false}, 00:35:22.803 "ddgst": ${ddgst:-false} 00:35:22.803 }, 00:35:22.803 "method": "bdev_nvme_attach_controller" 00:35:22.803 } 00:35:22.803 EOF 00:35:22.803 )") 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:22.803 "params": { 00:35:22.803 "name": "Nvme0", 00:35:22.803 "trtype": "tcp", 00:35:22.803 "traddr": "10.0.0.2", 00:35:22.803 "adrfam": "ipv4", 00:35:22.803 "trsvcid": "4420", 00:35:22.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.803 "hdgst": false, 00:35:22.803 "ddgst": false 00:35:22.803 }, 00:35:22.803 "method": "bdev_nvme_attach_controller" 00:35:22.803 }' 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:22.803 20:41:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.061 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:23.061 fio-3.35 00:35:23.061 Starting 1 thread 00:35:23.061 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.266 00:35:35.266 filename0: (groupid=0, jobs=1): err= 0: pid=25765: Mon Jul 15 20:41:11 2024 00:35:35.266 read: IOPS=188, BW=752KiB/s (770kB/s)(7536KiB/10021msec) 00:35:35.266 slat (nsec): min=4540, max=65271, avg=8787.02, stdev=3600.92 00:35:35.266 clat (usec): min=887, max=47066, avg=21246.71, stdev=20235.49 00:35:35.266 lat (usec): min=894, max=47094, avg=21255.50, stdev=20235.46 00:35:35.266 clat percentiles (usec): 00:35:35.266 | 1.00th=[ 906], 5.00th=[ 922], 10.00th=[ 930], 20.00th=[ 947], 00:35:35.266 | 30.00th=[ 971], 40.00th=[ 1037], 50.00th=[41157], 60.00th=[41157], 00:35:35.266 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:35:35.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:35:35.266 | 99.99th=[46924] 00:35:35.266 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=752.00, stdev=28.43, samples=20 00:35:35.266 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:35:35.266 lat 
(usec) : 1000=35.83% 00:35:35.266 lat (msec) : 2=14.07%, 50=50.11% 00:35:35.266 cpu : usr=88.65%, sys=11.04%, ctx=24, majf=0, minf=277 00:35:35.266 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.266 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.266 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:35.266 00:35:35.266 Run status group 0 (all jobs): 00:35:35.266 READ: bw=752KiB/s (770kB/s), 752KiB/s-752KiB/s (770kB/s-770kB/s), io=7536KiB (7717kB), run=10021-10021msec 00:35:35.266 20:41:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:35.266 20:41:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:35.266 20:41:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:35.266 20:41:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:35.266 20:41:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 00:35:35.267 real 0m11.124s 00:35:35.267 user 0m10.001s 00:35:35.267 sys 0m1.369s 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 ************************************ 00:35:35.267 END TEST fio_dif_1_default 00:35:35.267 ************************************ 00:35:35.267 20:41:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:35.267 20:41:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:35.267 20:41:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:35.267 20:41:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 ************************************ 00:35:35.267 START TEST fio_dif_1_multi_subsystems 00:35:35.267 ************************************ 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
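The fio_dif_1_default case that just finished, and the multi-subsystem case starting here, bracket their fio runs with the same create_subsystems/destroy_subsystems helpers from target/dif.sh, run once per subsystem id after the one-time nvmf_create_transport -t tcp -o --dif-insert-or-strip seen earlier. Expressed as direct scripts/rpc.py calls instead of the rpc_cmd wrapper shown in the trace, the per-subsystem setup and teardown are roughly:

N=0    # subsystem id; the multi-subsystem test below repeats this for N=0 and N=1
scripts/rpc.py bdev_null_create bdev_null$N 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$N --serial-number 53313233-$N --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$N bdev_null$N
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$N -t tcp -a 10.0.0.2 -s 4420
# teardown, after the fio job completes:
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$N
scripts/rpc.py bdev_null_delete bdev_null$N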
00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 bdev_null0 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 [2024-07-15 20:41:12.283468] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 bdev_null1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 20:41:12 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:35.267 { 00:35:35.267 "params": { 00:35:35.267 "name": "Nvme$subsystem", 00:35:35.267 "trtype": "$TEST_TRANSPORT", 00:35:35.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:35.267 "adrfam": "ipv4", 00:35:35.267 "trsvcid": "$NVMF_PORT", 00:35:35.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:35.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:35.267 "hdgst": ${hdgst:-false}, 00:35:35.267 "ddgst": ${ddgst:-false} 00:35:35.267 }, 00:35:35.267 "method": "bdev_nvme_attach_controller" 00:35:35.267 } 00:35:35.267 EOF 00:35:35.267 )") 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:35.267 { 00:35:35.267 "params": { 00:35:35.267 "name": "Nvme$subsystem", 00:35:35.267 "trtype": "$TEST_TRANSPORT", 00:35:35.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:35.267 "adrfam": "ipv4", 00:35:35.267 "trsvcid": "$NVMF_PORT", 00:35:35.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:35.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:35.267 "hdgst": ${hdgst:-false}, 00:35:35.267 "ddgst": ${ddgst:-false} 00:35:35.267 }, 00:35:35.267 "method": "bdev_nvme_attach_controller" 00:35:35.267 } 00:35:35.267 EOF 00:35:35.267 )") 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
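The two heredoc fragments captured above are what gen_nvmf_target_json emits, one per subsystem id, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT already expanded; they are joined with IFS=, and printed as the JSON visible just below, which the spdk_bdev fio plugin replays at startup to attach one NVMe-oF controller per subsystem. Each rendered fragment amounts to an explicit controller attach; as a rough hand-run equivalent against a standalone SPDK app (the flag names are the usual rpc.py ones and are not taken from this trace):

scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# the attached namespace then appears as bdev Nvme1n1 (SPDK's <controller>n<nsid> naming),
# which is presumably what the filename1 job in this trace reads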
00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:35.267 "params": { 00:35:35.267 "name": "Nvme0", 00:35:35.267 "trtype": "tcp", 00:35:35.267 "traddr": "10.0.0.2", 00:35:35.267 "adrfam": "ipv4", 00:35:35.267 "trsvcid": "4420", 00:35:35.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:35.267 "hdgst": false, 00:35:35.267 "ddgst": false 00:35:35.267 }, 00:35:35.267 "method": "bdev_nvme_attach_controller" 00:35:35.267 },{ 00:35:35.267 "params": { 00:35:35.267 "name": "Nvme1", 00:35:35.267 "trtype": "tcp", 00:35:35.267 "traddr": "10.0.0.2", 00:35:35.267 "adrfam": "ipv4", 00:35:35.267 "trsvcid": "4420", 00:35:35.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:35.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:35.267 "hdgst": false, 00:35:35.267 "ddgst": false 00:35:35.267 }, 00:35:35.267 "method": "bdev_nvme_attach_controller" 00:35:35.267 }' 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:35.267 20:41:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.267 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:35.267 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:35.267 fio-3.35 00:35:35.267 Starting 2 threads 00:35:35.267 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.262 00:35:45.262 filename0: (groupid=0, jobs=1): err= 0: pid=27160: Mon Jul 15 20:41:23 2024 00:35:45.262 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10017msec) 00:35:45.262 slat (nsec): min=6959, max=29910, avg=10306.57, stdev=4958.55 00:35:45.262 clat (usec): min=40835, max=42570, avg=41531.24, stdev=494.40 00:35:45.262 lat (usec): min=40842, max=42589, avg=41541.55, stdev=494.70 00:35:45.262 clat percentiles (usec): 00:35:45.262 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:45.262 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:45.262 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:45.262 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:45.262 | 99.99th=[42730] 
00:35:45.262 bw ( KiB/s): min= 352, max= 416, per=49.77%, avg=384.00, stdev=14.68, samples=20 00:35:45.262 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:35:45.262 lat (msec) : 50=100.00% 00:35:45.262 cpu : usr=94.22%, sys=5.49%, ctx=14, majf=0, minf=146 00:35:45.262 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.262 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.262 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:45.262 filename1: (groupid=0, jobs=1): err= 0: pid=27161: Mon Jul 15 20:41:23 2024 00:35:45.262 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10015msec) 00:35:45.262 slat (nsec): min=7024, max=54044, avg=9392.25, stdev=3821.48 00:35:45.262 clat (usec): min=40907, max=42991, avg=41353.47, stdev=494.06 00:35:45.262 lat (usec): min=40914, max=43004, avg=41362.86, stdev=494.66 00:35:45.262 clat percentiles (usec): 00:35:45.262 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:45.262 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:45.262 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:45.262 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:35:45.262 | 99.99th=[43254] 00:35:45.262 bw ( KiB/s): min= 384, max= 416, per=49.90%, avg=385.60, stdev= 7.16, samples=20 00:35:45.262 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:35:45.262 lat (msec) : 50=100.00% 00:35:45.262 cpu : usr=93.84%, sys=5.87%, ctx=18, majf=0, minf=102 00:35:45.262 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.262 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.262 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:45.262 00:35:45.262 Run status group 0 (all jobs): 00:35:45.262 READ: bw=771KiB/s (790kB/s), 385KiB/s-387KiB/s (394kB/s-396kB/s), io=7728KiB (7913kB), run=10015-10017msec 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 00:35:45.262 real 0m11.322s 00:35:45.262 user 0m20.183s 00:35:45.262 sys 0m1.458s 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 ************************************ 00:35:45.262 END TEST fio_dif_1_multi_subsystems 00:35:45.262 ************************************ 00:35:45.262 20:41:23 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:45.262 20:41:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:45.262 20:41:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:45.262 20:41:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 ************************************ 00:35:45.262 START TEST fio_dif_rand_params 00:35:45.262 ************************************ 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
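Both runs so far, and the fio_dif_rand_params cases that follow, invoke fio the same way: the fio_bdev helper from autotest_common.sh preloads SPDK's fio plugin and hands fio two anonymous files, the bdev JSON config on fd 62 (--spdk_json_conf /dev/fd/62) and the generated job file on fd 61. A minimal hand-rolled stand-in, assuming $json and $jobfile hold the strings printed by gen_nvmf_target_json and gen_fio_conf in the trace (herestring redirections are one way to reproduce the /dev/fd plumbing, paths relative to the SPDK repo root):

LD_PRELOAD=./build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
  62<<<"$json" 61<<<"$jobfile"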
00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 bdev_null0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.262 [2024-07-15 20:41:23.647715] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:45.262 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:45.263 { 00:35:45.263 "params": { 00:35:45.263 "name": "Nvme$subsystem", 00:35:45.263 "trtype": "$TEST_TRANSPORT", 00:35:45.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.263 "adrfam": "ipv4", 00:35:45.263 "trsvcid": "$NVMF_PORT", 00:35:45.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.263 "hdgst": ${hdgst:-false}, 00:35:45.263 "ddgst": ${ddgst:-false} 00:35:45.263 }, 00:35:45.263 "method": "bdev_nvme_attach_controller" 00:35:45.263 } 00:35:45.263 EOF 00:35:45.263 )") 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:45.263 "params": { 00:35:45.263 "name": "Nvme0", 00:35:45.263 "trtype": "tcp", 00:35:45.263 "traddr": "10.0.0.2", 00:35:45.263 "adrfam": "ipv4", 00:35:45.263 "trsvcid": "4420", 00:35:45.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:45.263 "hdgst": false, 00:35:45.263 "ddgst": false 00:35:45.263 }, 00:35:45.263 "method": "bdev_nvme_attach_controller" 00:35:45.263 }' 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:45.263 20:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.520 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:45.520 ... 
00:35:45.520 fio-3.35 00:35:45.520 Starting 3 threads 00:35:45.520 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.078 00:35:52.078 filename0: (groupid=0, jobs=1): err= 0: pid=28559: Mon Jul 15 20:41:29 2024 00:35:52.078 read: IOPS=150, BW=18.8MiB/s (19.7MB/s)(94.9MiB/5047msec) 00:35:52.078 slat (nsec): min=7455, max=42317, avg=13083.62, stdev=4021.07 00:35:52.078 clat (usec): min=6392, max=90587, avg=19816.96, stdev=16386.41 00:35:52.078 lat (usec): min=6404, max=90601, avg=19830.04, stdev=16386.43 00:35:52.078 clat percentiles (usec): 00:35:52.078 | 1.00th=[ 7177], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[10159], 00:35:52.078 | 30.00th=[10945], 40.00th=[11731], 50.00th=[13042], 60.00th=[14484], 00:35:52.078 | 70.00th=[15795], 80.00th=[17695], 90.00th=[52691], 95.00th=[54789], 00:35:52.078 | 99.00th=[56361], 99.50th=[56886], 99.90th=[90702], 99.95th=[90702], 00:35:52.078 | 99.99th=[90702] 00:35:52.078 bw ( KiB/s): min=14592, max=30464, per=28.73%, avg=19353.60, stdev=5307.43, samples=10 00:35:52.078 iops : min= 114, max= 238, avg=151.20, stdev=41.46, samples=10 00:35:52.078 lat (msec) : 10=18.58%, 20=62.71%, 50=0.66%, 100=18.05% 00:35:52.078 cpu : usr=91.64%, sys=7.77%, ctx=13, majf=0, minf=83 00:35:52.078 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.078 issued rwts: total=759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.078 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.078 filename0: (groupid=0, jobs=1): err= 0: pid=28560: Mon Jul 15 20:41:29 2024 00:35:52.079 read: IOPS=186, BW=23.3MiB/s (24.5MB/s)(117MiB/5005msec) 00:35:52.079 slat (nsec): min=7587, max=40322, avg=14797.75, stdev=4945.91 00:35:52.079 clat (usec): min=5815, max=94060, avg=16054.05, stdev=14752.80 00:35:52.079 lat (usec): min=5828, max=94074, avg=16068.85, stdev=14752.86 00:35:52.079 clat percentiles (usec): 00:35:52.079 | 1.00th=[ 6063], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 7832], 00:35:52.079 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[11994], 00:35:52.079 | 70.00th=[13566], 80.00th=[15139], 90.00th=[50594], 95.00th=[53740], 00:35:52.079 | 99.00th=[56886], 99.50th=[64750], 99.90th=[93848], 99.95th=[93848], 00:35:52.079 | 99.99th=[93848] 00:35:52.079 bw ( KiB/s): min=16896, max=32512, per=35.39%, avg=23837.30, stdev=5606.99, samples=10 00:35:52.079 iops : min= 132, max= 254, avg=186.20, stdev=43.83, samples=10 00:35:52.079 lat (msec) : 10=41.54%, 20=45.61%, 50=1.07%, 100=11.78% 00:35:52.079 cpu : usr=91.51%, sys=8.07%, ctx=13, majf=0, minf=44 00:35:52.079 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.079 issued rwts: total=934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.079 filename0: (groupid=0, jobs=1): err= 0: pid=28561: Mon Jul 15 20:41:29 2024 00:35:52.079 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(120MiB/5022msec) 00:35:52.079 slat (usec): min=7, max=213, avg=13.68, stdev= 8.12 00:35:52.079 clat (usec): min=5428, max=58540, avg=15623.75, stdev=13538.79 00:35:52.079 lat (usec): min=5441, max=58552, avg=15637.43, stdev=13538.52 00:35:52.079 clat percentiles (usec): 00:35:52.079 | 
1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 8291], 00:35:52.079 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11076], 60.00th=[12256], 00:35:52.079 | 70.00th=[13829], 80.00th=[15401], 90.00th=[50070], 95.00th=[52691], 00:35:52.079 | 99.00th=[56361], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:35:52.079 | 99.99th=[58459] 00:35:52.079 bw ( KiB/s): min=14080, max=32256, per=36.49%, avg=24581.80, stdev=6494.42, samples=10 00:35:52.079 iops : min= 110, max= 252, avg=192.00, stdev=50.70, samples=10 00:35:52.079 lat (msec) : 10=37.80%, 20=50.99%, 50=1.25%, 100=9.97% 00:35:52.079 cpu : usr=91.20%, sys=8.31%, ctx=13, majf=0, minf=211 00:35:52.079 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.079 issued rwts: total=963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.079 00:35:52.079 Run status group 0 (all jobs): 00:35:52.079 READ: bw=65.8MiB/s (69.0MB/s), 18.8MiB/s-24.0MiB/s (19.7MB/s-25.1MB/s), io=332MiB (348MB), run=5005-5047msec 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:52.079 20:41:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 bdev_null0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 [2024-07-15 20:41:29.807158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 bdev_null1 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 bdev_null2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:52.079 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.079 { 00:35:52.079 "params": { 00:35:52.079 "name": "Nvme$subsystem", 00:35:52.080 "trtype": "$TEST_TRANSPORT", 00:35:52.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.080 "adrfam": "ipv4", 00:35:52.080 "trsvcid": "$NVMF_PORT", 00:35:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.080 "hdgst": ${hdgst:-false}, 00:35:52.080 "ddgst": ${ddgst:-false} 00:35:52.080 }, 00:35:52.080 "method": "bdev_nvme_attach_controller" 00:35:52.080 } 00:35:52.080 EOF 00:35:52.080 )") 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.080 { 00:35:52.080 "params": { 00:35:52.080 "name": "Nvme$subsystem", 00:35:52.080 "trtype": "$TEST_TRANSPORT", 00:35:52.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.080 "adrfam": "ipv4", 00:35:52.080 "trsvcid": "$NVMF_PORT", 00:35:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.080 "hdgst": ${hdgst:-false}, 00:35:52.080 "ddgst": ${ddgst:-false} 00:35:52.080 }, 00:35:52.080 "method": "bdev_nvme_attach_controller" 00:35:52.080 } 00:35:52.080 EOF 00:35:52.080 )") 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.080 { 00:35:52.080 "params": { 00:35:52.080 "name": "Nvme$subsystem", 00:35:52.080 "trtype": "$TEST_TRANSPORT", 00:35:52.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.080 "adrfam": "ipv4", 00:35:52.080 "trsvcid": "$NVMF_PORT", 00:35:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.080 "hdgst": ${hdgst:-false}, 00:35:52.080 "ddgst": ${ddgst:-false} 00:35:52.080 }, 00:35:52.080 "method": "bdev_nvme_attach_controller" 00:35:52.080 } 00:35:52.080 EOF 00:35:52.080 )") 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:52.080 "params": { 00:35:52.080 "name": "Nvme0", 00:35:52.080 "trtype": "tcp", 00:35:52.080 "traddr": "10.0.0.2", 00:35:52.080 "adrfam": "ipv4", 00:35:52.080 "trsvcid": "4420", 00:35:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.080 "hdgst": false, 00:35:52.080 "ddgst": false 00:35:52.080 }, 00:35:52.080 "method": "bdev_nvme_attach_controller" 00:35:52.080 },{ 00:35:52.080 "params": { 00:35:52.080 "name": "Nvme1", 00:35:52.080 "trtype": "tcp", 00:35:52.080 "traddr": "10.0.0.2", 00:35:52.080 "adrfam": "ipv4", 00:35:52.080 "trsvcid": "4420", 00:35:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:52.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:52.080 "hdgst": false, 00:35:52.080 "ddgst": false 00:35:52.080 }, 00:35:52.080 "method": "bdev_nvme_attach_controller" 00:35:52.080 },{ 00:35:52.080 "params": { 00:35:52.080 "name": "Nvme2", 00:35:52.080 "trtype": "tcp", 00:35:52.080 "traddr": "10.0.0.2", 00:35:52.080 "adrfam": "ipv4", 00:35:52.080 "trsvcid": "4420", 00:35:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:52.080 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:52.080 "hdgst": false, 00:35:52.080 "ddgst": false 00:35:52.080 }, 00:35:52.080 "method": "bdev_nvme_attach_controller" 00:35:52.080 }' 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:52.080 20:41:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.080 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:52.080 ... 00:35:52.080 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:52.080 ... 00:35:52.080 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:52.080 ... 00:35:52.080 fio-3.35 00:35:52.080 Starting 24 threads 00:35:52.080 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.287 00:36:04.287 filename0: (groupid=0, jobs=1): err= 0: pid=29371: Mon Jul 15 20:41:41 2024 00:36:04.287 read: IOPS=86, BW=346KiB/s (354kB/s)(3504KiB/10126msec) 00:36:04.287 slat (usec): min=4, max=186, avg=26.20, stdev=23.96 00:36:04.287 clat (msec): min=4, max=327, avg=184.54, stdev=63.27 00:36:04.287 lat (msec): min=4, max=327, avg=184.57, stdev=63.28 00:36:04.287 clat percentiles (msec): 00:36:04.287 | 1.00th=[ 5], 5.00th=[ 57], 10.00th=[ 109], 20.00th=[ 142], 00:36:04.287 | 30.00th=[ 165], 40.00th=[ 171], 50.00th=[ 190], 60.00th=[ 199], 00:36:04.287 | 70.00th=[ 209], 80.00th=[ 239], 90.00th=[ 271], 95.00th=[ 284], 00:36:04.287 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 330], 99.95th=[ 330], 00:36:04.287 | 99.99th=[ 330] 00:36:04.287 bw ( KiB/s): min= 240, max= 768, per=5.24%, avg=344.00, stdev=122.57, samples=20 00:36:04.287 iops : min= 60, max= 192, avg=86.00, stdev=30.64, samples=20 00:36:04.287 lat (msec) : 10=3.42%, 20=0.23%, 50=0.23%, 100=3.42%, 250=76.83% 00:36:04.287 lat (msec) : 500=15.87% 00:36:04.287 cpu : usr=97.61%, sys=1.63%, ctx=30, majf=0, minf=57 00:36:04.287 IO depths : 1=1.9%, 2=7.2%, 4=21.7%, 8=58.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:36:04.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.287 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.287 issued rwts: total=876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.287 filename0: (groupid=0, jobs=1): err= 0: pid=29372: Mon Jul 15 20:41:41 2024 00:36:04.287 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10075msec) 00:36:04.287 slat (usec): min=7, max=312, avg=59.62, stdev=38.28 00:36:04.287 clat (msec): min=152, max=374, avg=251.34, stdev=37.50 00:36:04.287 lat (msec): min=152, max=374, avg=251.39, stdev=37.51 00:36:04.287 clat percentiles (msec): 00:36:04.287 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 224], 00:36:04.287 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 266], 00:36:04.287 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 300], 00:36:04.287 | 99.00th=[ 317], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 376], 00:36:04.287 | 99.99th=[ 376] 00:36:04.287 bw ( KiB/s): min= 128, max= 368, per=3.79%, avg=249.60, stdev=46.55, samples=20 00:36:04.287 iops : min= 32, max= 92, avg=62.40, stdev=11.64, samples=20 00:36:04.287 lat (msec) : 250=42.03%, 500=57.97% 00:36:04.287 cpu : usr=94.70%, sys=2.75%, ctx=212, 
majf=0, minf=35 00:36:04.287 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:04.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.287 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.287 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.287 filename0: (groupid=0, jobs=1): err= 0: pid=29373: Mon Jul 15 20:41:41 2024 00:36:04.287 read: IOPS=90, BW=364KiB/s (372kB/s)(3680KiB/10123msec) 00:36:04.287 slat (nsec): min=7328, max=50012, avg=11773.08, stdev=6024.22 00:36:04.287 clat (msec): min=100, max=299, avg=175.48, stdev=33.95 00:36:04.287 lat (msec): min=100, max=299, avg=175.49, stdev=33.95 00:36:04.287 clat percentiles (msec): 00:36:04.287 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 123], 20.00th=[ 150], 00:36:04.287 | 30.00th=[ 165], 40.00th=[ 171], 50.00th=[ 180], 60.00th=[ 184], 00:36:04.287 | 70.00th=[ 192], 80.00th=[ 199], 90.00th=[ 211], 95.00th=[ 220], 00:36:04.287 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 300], 00:36:04.287 | 99.99th=[ 300] 00:36:04.287 bw ( KiB/s): min= 304, max= 512, per=5.50%, avg=361.60, stdev=54.05, samples=20 00:36:04.287 iops : min= 76, max= 128, avg=90.40, stdev=13.51, samples=20 00:36:04.287 lat (msec) : 250=97.17%, 500=2.83% 00:36:04.287 cpu : usr=98.03%, sys=1.60%, ctx=21, majf=0, minf=47 00:36:04.287 IO depths : 1=0.5%, 2=1.3%, 4=8.7%, 8=77.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:36:04.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.287 complete : 0=0.0%, 4=89.5%, 8=5.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.287 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.287 filename0: (groupid=0, jobs=1): err= 0: pid=29374: Mon Jul 15 20:41:41 2024 00:36:04.287 read: IOPS=60, BW=242KiB/s (248kB/s)(2432KiB/10055msec) 00:36:04.287 slat (nsec): min=8256, max=69774, avg=22883.85, stdev=15050.71 00:36:04.287 clat (msec): min=170, max=346, avg=264.37, stdev=40.16 00:36:04.288 lat (msec): min=170, max=346, avg=264.39, stdev=40.15 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 171], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 239], 00:36:04.288 | 30.00th=[ 247], 40.00th=[ 259], 50.00th=[ 264], 60.00th=[ 275], 00:36:04.288 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 313], 95.00th=[ 342], 00:36:04.288 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:36:04.288 | 99.99th=[ 347] 00:36:04.288 bw ( KiB/s): min= 128, max= 384, per=3.59%, avg=236.80, stdev=62.64, samples=20 00:36:04.288 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:36:04.288 lat (msec) : 250=34.21%, 500=65.79% 00:36:04.288 cpu : usr=98.16%, sys=1.44%, ctx=31, majf=0, minf=26 00:36:04.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename0: (groupid=0, jobs=1): err= 0: pid=29375: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=64, BW=259KiB/s (266kB/s)(2624KiB/10118msec) 00:36:04.288 slat (usec): min=4, max=107, avg=47.93, stdev=24.11 00:36:04.288 clat (msec): min=119, max=429, 
avg=245.80, stdev=56.02 00:36:04.288 lat (msec): min=119, max=429, avg=245.85, stdev=56.03 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 120], 5.00th=[ 146], 10.00th=[ 176], 20.00th=[ 186], 00:36:04.288 | 30.00th=[ 220], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 266], 00:36:04.288 | 70.00th=[ 271], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 330], 00:36:04.288 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 430], 99.95th=[ 430], 00:36:04.288 | 99.99th=[ 430] 00:36:04.288 bw ( KiB/s): min= 128, max= 384, per=3.90%, avg=256.00, stdev=70.61, samples=20 00:36:04.288 iops : min= 32, max= 96, avg=64.00, stdev=17.65, samples=20 00:36:04.288 lat (msec) : 250=45.58%, 500=54.42% 00:36:04.288 cpu : usr=97.14%, sys=1.86%, ctx=37, majf=0, minf=28 00:36:04.288 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename0: (groupid=0, jobs=1): err= 0: pid=29377: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10086msec) 00:36:04.288 slat (nsec): min=8848, max=91334, avg=32884.55, stdev=17546.42 00:36:04.288 clat (msec): min=110, max=381, avg=251.87, stdev=47.26 00:36:04.288 lat (msec): min=110, max=381, avg=251.91, stdev=47.26 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 142], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 207], 00:36:04.288 | 30.00th=[ 234], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 268], 00:36:04.288 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 309], 95.00th=[ 317], 00:36:04.288 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 380], 99.95th=[ 380], 00:36:04.288 | 99.99th=[ 380] 00:36:04.288 bw ( KiB/s): min= 128, max= 368, per=3.79%, avg=249.60, stdev=46.55, samples=20 00:36:04.288 iops : min= 32, max= 92, avg=62.40, stdev=11.64, samples=20 00:36:04.288 lat (msec) : 250=37.50%, 500=62.50% 00:36:04.288 cpu : usr=98.22%, sys=1.40%, ctx=60, majf=0, minf=24 00:36:04.288 IO depths : 1=3.0%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename0: (groupid=0, jobs=1): err= 0: pid=29378: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=61, BW=248KiB/s (254kB/s)(2496KiB/10082msec) 00:36:04.288 slat (usec): min=8, max=153, avg=31.24, stdev=20.95 00:36:04.288 clat (msec): min=111, max=447, avg=258.22, stdev=54.34 00:36:04.288 lat (msec): min=111, max=447, avg=258.26, stdev=54.33 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 125], 5.00th=[ 159], 10.00th=[ 182], 20.00th=[ 222], 00:36:04.288 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 266], 60.00th=[ 271], 00:36:04.288 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 326], 95.00th=[ 334], 00:36:04.288 | 99.00th=[ 397], 99.50th=[ 422], 99.90th=[ 447], 99.95th=[ 447], 00:36:04.288 | 99.99th=[ 447] 00:36:04.288 bw ( KiB/s): min= 128, max= 368, per=3.70%, avg=243.20, stdev=51.81, samples=20 00:36:04.288 iops : min= 32, max= 92, avg=60.80, stdev=12.95, samples=20 00:36:04.288 lat (msec) : 250=30.77%, 500=69.23% 
00:36:04.288 cpu : usr=97.76%, sys=1.63%, ctx=56, majf=0, minf=25 00:36:04.288 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename0: (groupid=0, jobs=1): err= 0: pid=29379: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10090msec) 00:36:04.288 slat (usec): min=7, max=175, avg=65.67, stdev=20.92 00:36:04.288 clat (msec): min=123, max=416, avg=258.18, stdev=44.81 00:36:04.288 lat (msec): min=123, max=416, avg=258.24, stdev=44.82 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 157], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 228], 00:36:04.288 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 271], 00:36:04.288 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 338], 00:36:04.288 | 99.00th=[ 380], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 418], 00:36:04.288 | 99.99th=[ 418] 00:36:04.288 bw ( KiB/s): min= 128, max= 384, per=3.70%, avg=243.20, stdev=55.57, samples=20 00:36:04.288 iops : min= 32, max= 96, avg=60.80, stdev=13.89, samples=20 00:36:04.288 lat (msec) : 250=38.46%, 500=61.54% 00:36:04.288 cpu : usr=97.13%, sys=1.86%, ctx=58, majf=0, minf=24 00:36:04.288 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename1: (groupid=0, jobs=1): err= 0: pid=29380: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=61, BW=248KiB/s (254kB/s)(2496KiB/10080msec) 00:36:04.288 slat (nsec): min=8350, max=92186, avg=29795.27, stdev=18503.49 00:36:04.288 clat (msec): min=142, max=398, avg=258.18, stdev=40.43 00:36:04.288 lat (msec): min=142, max=398, avg=258.21, stdev=40.42 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 234], 00:36:04.288 | 30.00th=[ 251], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 271], 00:36:04.288 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 317], 00:36:04.288 | 99.00th=[ 338], 99.50th=[ 380], 99.90th=[ 397], 99.95th=[ 397], 00:36:04.288 | 99.99th=[ 397] 00:36:04.288 bw ( KiB/s): min= 128, max= 368, per=3.70%, avg=243.20, stdev=54.10, samples=20 00:36:04.288 iops : min= 32, max= 92, avg=60.80, stdev=13.52, samples=20 00:36:04.288 lat (msec) : 250=29.17%, 500=70.83% 00:36:04.288 cpu : usr=96.51%, sys=2.08%, ctx=86, majf=0, minf=30 00:36:04.288 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename1: (groupid=0, jobs=1): err= 0: pid=29381: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=90, BW=360KiB/s (369kB/s)(3648KiB/10123msec) 00:36:04.288 slat (nsec): min=4739, max=85437, avg=16753.56, 
stdev=14499.95 00:36:04.288 clat (msec): min=58, max=241, avg=177.04, stdev=32.85 00:36:04.288 lat (msec): min=58, max=241, avg=177.05, stdev=32.85 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 59], 5.00th=[ 131], 10.00th=[ 142], 20.00th=[ 155], 00:36:04.288 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:36:04.288 | 70.00th=[ 194], 80.00th=[ 199], 90.00th=[ 213], 95.00th=[ 224], 00:36:04.288 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 243], 99.95th=[ 243], 00:36:04.288 | 99.99th=[ 243] 00:36:04.288 bw ( KiB/s): min= 256, max= 512, per=5.45%, avg=358.40, stdev=61.07, samples=20 00:36:04.288 iops : min= 64, max= 128, avg=89.60, stdev=15.27, samples=20 00:36:04.288 lat (msec) : 100=3.51%, 250=96.49% 00:36:04.288 cpu : usr=97.74%, sys=1.62%, ctx=42, majf=0, minf=32 00:36:04.288 IO depths : 1=0.8%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename1: (groupid=0, jobs=1): err= 0: pid=29382: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10087msec) 00:36:04.288 slat (usec): min=8, max=105, avg=29.74, stdev=15.69 00:36:04.288 clat (msec): min=155, max=376, avg=258.35, stdev=40.18 00:36:04.288 lat (msec): min=155, max=376, avg=258.38, stdev=40.18 00:36:04.288 clat percentiles (msec): 00:36:04.288 | 1.00th=[ 178], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 230], 00:36:04.288 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 266], 60.00th=[ 271], 00:36:04.288 | 70.00th=[ 279], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 317], 00:36:04.288 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 376], 99.95th=[ 376], 00:36:04.288 | 99.99th=[ 376] 00:36:04.288 bw ( KiB/s): min= 128, max= 384, per=3.70%, avg=243.20, stdev=55.57, samples=20 00:36:04.288 iops : min= 32, max= 96, avg=60.80, stdev=13.89, samples=20 00:36:04.288 lat (msec) : 250=34.62%, 500=65.38% 00:36:04.288 cpu : usr=97.32%, sys=1.69%, ctx=53, majf=0, minf=33 00:36:04.288 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.288 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.288 filename1: (groupid=0, jobs=1): err= 0: pid=29383: Mon Jul 15 20:41:41 2024 00:36:04.288 read: IOPS=68, BW=273KiB/s (280kB/s)(2752KiB/10075msec) 00:36:04.288 slat (nsec): min=4863, max=98220, avg=25700.88, stdev=19051.00 00:36:04.288 clat (msec): min=139, max=370, avg=234.08, stdev=41.73 00:36:04.288 lat (msec): min=139, max=370, avg=234.11, stdev=41.73 00:36:04.288 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 140], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:36:04.289 | 30.00th=[ 207], 40.00th=[ 226], 50.00th=[ 239], 60.00th=[ 251], 00:36:04.289 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 292], 00:36:04.289 | 99.00th=[ 309], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:36:04.289 | 99.99th=[ 372] 00:36:04.289 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=268.80, stdev=36.93, samples=20 00:36:04.289 iops : min= 64, max= 96, avg=67.20, stdev= 9.23, samples=20 
00:36:04.289 lat (msec) : 250=61.05%, 500=38.95% 00:36:04.289 cpu : usr=97.82%, sys=1.70%, ctx=42, majf=0, minf=36 00:36:04.289 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename1: (groupid=0, jobs=1): err= 0: pid=29385: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10088msec) 00:36:04.289 slat (usec): min=9, max=105, avg=34.25, stdev=21.20 00:36:04.289 clat (msec): min=111, max=392, avg=239.68, stdev=44.49 00:36:04.289 lat (msec): min=111, max=392, avg=239.71, stdev=44.49 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 142], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 186], 00:36:04.289 | 30.00th=[ 220], 40.00th=[ 234], 50.00th=[ 251], 60.00th=[ 255], 00:36:04.289 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 296], 00:36:04.289 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 393], 00:36:04.289 | 99.99th=[ 393] 00:36:04.289 bw ( KiB/s): min= 128, max= 384, per=3.99%, avg=262.40, stdev=50.44, samples=20 00:36:04.289 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:36:04.289 lat (msec) : 250=50.60%, 500=49.40% 00:36:04.289 cpu : usr=97.82%, sys=1.48%, ctx=24, majf=0, minf=35 00:36:04.289 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename1: (groupid=0, jobs=1): err= 0: pid=29386: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10079msec) 00:36:04.289 slat (usec): min=8, max=102, avg=39.66, stdev=23.19 00:36:04.289 clat (msec): min=104, max=431, avg=264.31, stdev=50.87 00:36:04.289 lat (msec): min=104, max=431, avg=264.35, stdev=50.87 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 148], 5.00th=[ 171], 10.00th=[ 190], 20.00th=[ 236], 00:36:04.289 | 30.00th=[ 245], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 275], 00:36:04.289 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 338], 95.00th=[ 347], 00:36:04.289 | 99.00th=[ 393], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:36:04.289 | 99.99th=[ 430] 00:36:04.289 bw ( KiB/s): min= 128, max= 384, per=3.59%, avg=236.80, stdev=59.55, samples=20 00:36:04.289 iops : min= 32, max= 96, avg=59.20, stdev=14.89, samples=20 00:36:04.289 lat (msec) : 250=36.18%, 500=63.82% 00:36:04.289 cpu : usr=97.66%, sys=1.82%, ctx=37, majf=0, minf=36 00:36:04.289 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename1: (groupid=0, jobs=1): err= 0: pid=29387: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=68, BW=273KiB/s (280kB/s)(2752KiB/10075msec) 00:36:04.289 slat (usec): min=11, 
max=339, avg=44.30, stdev=29.86 00:36:04.289 clat (msec): min=139, max=375, avg=233.96, stdev=41.73 00:36:04.289 lat (msec): min=140, max=375, avg=234.00, stdev=41.73 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 153], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:36:04.289 | 30.00th=[ 205], 40.00th=[ 226], 50.00th=[ 243], 60.00th=[ 251], 00:36:04.289 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 292], 00:36:04.289 | 99.00th=[ 330], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 376], 00:36:04.289 | 99.99th=[ 376] 00:36:04.289 bw ( KiB/s): min= 240, max= 384, per=4.08%, avg=268.80, stdev=37.29, samples=20 00:36:04.289 iops : min= 60, max= 96, avg=67.20, stdev= 9.32, samples=20 00:36:04.289 lat (msec) : 250=62.65%, 500=37.35% 00:36:04.289 cpu : usr=97.52%, sys=1.52%, ctx=30, majf=0, minf=43 00:36:04.289 IO depths : 1=2.8%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename1: (groupid=0, jobs=1): err= 0: pid=29388: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=65, BW=260KiB/s (267kB/s)(2624KiB/10075msec) 00:36:04.289 slat (usec): min=10, max=165, avg=58.15, stdev=22.88 00:36:04.289 clat (msec): min=110, max=427, avg=245.25, stdev=50.79 00:36:04.289 lat (msec): min=110, max=427, avg=245.31, stdev=50.80 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 111], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 192], 00:36:04.289 | 30.00th=[ 224], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 266], 00:36:04.289 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 326], 00:36:04.289 | 99.00th=[ 376], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426], 00:36:04.289 | 99.99th=[ 426] 00:36:04.289 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=256.00, stdev=55.18, samples=20 00:36:04.289 iops : min= 32, max= 96, avg=64.00, stdev=13.80, samples=20 00:36:04.289 lat (msec) : 250=47.87%, 500=52.13% 00:36:04.289 cpu : usr=96.98%, sys=1.96%, ctx=66, majf=0, minf=31 00:36:04.289 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename2: (groupid=0, jobs=1): err= 0: pid=29389: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=69, BW=278KiB/s (285kB/s)(2816KiB/10120msec) 00:36:04.289 slat (nsec): min=9019, max=48846, avg=23269.08, stdev=7353.90 00:36:04.289 clat (msec): min=104, max=365, avg=229.25, stdev=49.82 00:36:04.289 lat (msec): min=104, max=365, avg=229.28, stdev=49.82 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 116], 5.00th=[ 140], 10.00th=[ 159], 20.00th=[ 182], 00:36:04.289 | 30.00th=[ 203], 40.00th=[ 224], 50.00th=[ 239], 60.00th=[ 255], 00:36:04.289 | 70.00th=[ 266], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 292], 00:36:04.289 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 00:36:04.289 | 99.99th=[ 368] 00:36:04.289 bw ( KiB/s): min= 256, max= 384, per=4.19%, avg=275.20, stdev=44.84, samples=20 00:36:04.289 iops : min= 64, max= 96, avg=68.80, 
stdev=11.21, samples=20 00:36:04.289 lat (msec) : 250=57.67%, 500=42.33% 00:36:04.289 cpu : usr=97.47%, sys=1.93%, ctx=16, majf=0, minf=34 00:36:04.289 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename2: (groupid=0, jobs=1): err= 0: pid=29390: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=65, BW=260KiB/s (266kB/s)(2624KiB/10083msec) 00:36:04.289 slat (usec): min=6, max=151, avg=27.06, stdev=14.02 00:36:04.289 clat (msec): min=136, max=338, avg=245.69, stdev=47.51 00:36:04.289 lat (msec): min=136, max=338, avg=245.71, stdev=47.51 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 138], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 188], 00:36:04.289 | 30.00th=[ 228], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 268], 00:36:04.289 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 309], 00:36:04.289 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:36:04.289 | 99.99th=[ 338] 00:36:04.289 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=256.00, stdev=41.85, samples=20 00:36:04.289 iops : min= 32, max= 96, avg=64.00, stdev=10.46, samples=20 00:36:04.289 lat (msec) : 250=43.60%, 500=56.40% 00:36:04.289 cpu : usr=97.73%, sys=1.72%, ctx=34, majf=0, minf=25 00:36:04.289 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename2: (groupid=0, jobs=1): err= 0: pid=29391: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=71, BW=286KiB/s (293kB/s)(2880KiB/10075msec) 00:36:04.289 slat (nsec): min=8519, max=99139, avg=27244.87, stdev=19636.61 00:36:04.289 clat (msec): min=103, max=295, avg=223.63, stdev=49.80 00:36:04.289 lat (msec): min=103, max=295, avg=223.66, stdev=49.80 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 105], 5.00th=[ 130], 10.00th=[ 146], 20.00th=[ 184], 00:36:04.289 | 30.00th=[ 197], 40.00th=[ 218], 50.00th=[ 234], 60.00th=[ 249], 00:36:04.289 | 70.00th=[ 259], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 292], 00:36:04.289 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:36:04.289 | 99.99th=[ 296] 00:36:04.289 bw ( KiB/s): min= 256, max= 384, per=4.28%, avg=281.60, stdev=52.53, samples=20 00:36:04.289 iops : min= 64, max= 96, avg=70.40, stdev=13.13, samples=20 00:36:04.289 lat (msec) : 250=62.50%, 500=37.50% 00:36:04.289 cpu : usr=97.87%, sys=1.69%, ctx=40, majf=0, minf=43 00:36:04.289 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:04.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.289 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.289 filename2: (groupid=0, jobs=1): err= 0: pid=29393: Mon Jul 15 20:41:41 2024 00:36:04.289 read: IOPS=61, BW=248KiB/s 
(254kB/s)(2496KiB/10081msec) 00:36:04.289 slat (usec): min=8, max=191, avg=55.28, stdev=27.06 00:36:04.289 clat (msec): min=124, max=334, avg=257.99, stdev=43.17 00:36:04.289 lat (msec): min=124, max=334, avg=258.04, stdev=43.17 00:36:04.289 clat percentiles (msec): 00:36:04.289 | 1.00th=[ 125], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 234], 00:36:04.290 | 30.00th=[ 251], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 271], 00:36:04.290 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 309], 95.00th=[ 334], 00:36:04.290 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:36:04.290 | 99.99th=[ 334] 00:36:04.290 bw ( KiB/s): min= 128, max= 384, per=3.70%, avg=243.20, stdev=57.24, samples=20 00:36:04.290 iops : min= 32, max= 96, avg=60.80, stdev=14.31, samples=20 00:36:04.290 lat (msec) : 250=27.24%, 500=72.76% 00:36:04.290 cpu : usr=97.48%, sys=1.69%, ctx=40, majf=0, minf=36 00:36:04.290 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.290 filename2: (groupid=0, jobs=1): err= 0: pid=29394: Mon Jul 15 20:41:41 2024 00:36:04.290 read: IOPS=86, BW=348KiB/s (356kB/s)(3520KiB/10123msec) 00:36:04.290 slat (nsec): min=7157, max=85857, avg=19791.19, stdev=18655.10 00:36:04.290 clat (msec): min=58, max=285, avg=183.44, stdev=37.17 00:36:04.290 lat (msec): min=58, max=285, avg=183.45, stdev=37.18 00:36:04.290 clat percentiles (msec): 00:36:04.290 | 1.00th=[ 59], 5.00th=[ 131], 10.00th=[ 142], 20.00th=[ 161], 00:36:04.290 | 30.00th=[ 171], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:36:04.290 | 70.00th=[ 199], 80.00th=[ 207], 90.00th=[ 230], 95.00th=[ 243], 00:36:04.290 | 99.00th=[ 262], 99.50th=[ 264], 99.90th=[ 284], 99.95th=[ 284], 00:36:04.290 | 99.99th=[ 284] 00:36:04.290 bw ( KiB/s): min= 256, max= 512, per=5.25%, avg=345.60, stdev=71.82, samples=20 00:36:04.290 iops : min= 64, max= 128, avg=86.40, stdev=17.95, samples=20 00:36:04.290 lat (msec) : 100=3.64%, 250=94.09%, 500=2.27% 00:36:04.290 cpu : usr=97.92%, sys=1.69%, ctx=26, majf=0, minf=38 00:36:04.290 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:04.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.290 filename2: (groupid=0, jobs=1): err= 0: pid=29395: Mon Jul 15 20:41:41 2024 00:36:04.290 read: IOPS=68, BW=272KiB/s (279kB/s)(2752KiB/10101msec) 00:36:04.290 slat (usec): min=11, max=226, avg=47.48, stdev=27.79 00:36:04.290 clat (msec): min=114, max=380, avg=234.02, stdev=41.09 00:36:04.290 lat (msec): min=114, max=380, avg=234.06, stdev=41.09 00:36:04.290 clat percentiles (msec): 00:36:04.290 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:36:04.290 | 30.00th=[ 199], 40.00th=[ 230], 50.00th=[ 239], 60.00th=[ 247], 00:36:04.290 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 284], 95.00th=[ 292], 00:36:04.290 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 380], 99.95th=[ 380], 00:36:04.290 | 99.99th=[ 380] 00:36:04.290 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=268.80, 
stdev=36.93, samples=20 00:36:04.290 iops : min= 64, max= 96, avg=67.20, stdev= 9.23, samples=20 00:36:04.290 lat (msec) : 250=62.50%, 500=37.50% 00:36:04.290 cpu : usr=96.04%, sys=2.26%, ctx=129, majf=0, minf=26 00:36:04.290 IO depths : 1=2.5%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:04.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.290 filename2: (groupid=0, jobs=1): err= 0: pid=29396: Mon Jul 15 20:41:41 2024 00:36:04.290 read: IOPS=65, BW=260KiB/s (266kB/s)(2624KiB/10085msec) 00:36:04.290 slat (usec): min=5, max=145, avg=25.52, stdev=15.18 00:36:04.290 clat (msec): min=141, max=396, avg=245.76, stdev=49.90 00:36:04.290 lat (msec): min=141, max=396, avg=245.78, stdev=49.90 00:36:04.290 clat percentiles (msec): 00:36:04.290 | 1.00th=[ 142], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 188], 00:36:04.290 | 30.00th=[ 228], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 268], 00:36:04.290 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 334], 00:36:04.290 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 397], 99.95th=[ 397], 00:36:04.290 | 99.99th=[ 397] 00:36:04.290 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=256.00, stdev=55.18, samples=20 00:36:04.290 iops : min= 32, max= 96, avg=64.00, stdev=13.80, samples=20 00:36:04.290 lat (msec) : 250=46.49%, 500=53.51% 00:36:04.290 cpu : usr=96.39%, sys=2.27%, ctx=104, majf=0, minf=27 00:36:04.290 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:04.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.290 filename2: (groupid=0, jobs=1): err= 0: pid=29397: Mon Jul 15 20:41:41 2024 00:36:04.290 read: IOPS=63, BW=253KiB/s (260kB/s)(2560KiB/10101msec) 00:36:04.290 slat (usec): min=8, max=282, avg=61.34, stdev=33.37 00:36:04.290 clat (msec): min=123, max=446, avg=252.01, stdev=51.56 00:36:04.290 lat (msec): min=123, max=446, avg=252.07, stdev=51.57 00:36:04.290 clat percentiles (msec): 00:36:04.290 | 1.00th=[ 124], 5.00th=[ 171], 10.00th=[ 180], 20.00th=[ 197], 00:36:04.290 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 257], 60.00th=[ 268], 00:36:04.290 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 309], 95.00th=[ 334], 00:36:04.290 | 99.00th=[ 338], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 447], 00:36:04.290 | 99.99th=[ 447] 00:36:04.290 bw ( KiB/s): min= 128, max= 384, per=3.79%, avg=249.60, stdev=50.97, samples=20 00:36:04.290 iops : min= 32, max= 96, avg=62.40, stdev=12.74, samples=20 00:36:04.290 lat (msec) : 250=35.94%, 500=64.06% 00:36:04.290 cpu : usr=96.52%, sys=2.06%, ctx=59, majf=0, minf=25 00:36:04.290 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:04.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.290 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.290 00:36:04.290 Run status group 0 (all jobs): 00:36:04.290 READ: bw=6568KiB/s 
(6726kB/s), 241KiB/s-364KiB/s (247kB/s-372kB/s), io=65.0MiB (68.1MB), run=10055-10126msec 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.290 bdev_null0 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.290 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 [2024-07-15 20:41:41.457086] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:04.291 20:41:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 bdev_null1 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.291 { 00:36:04.291 "params": { 00:36:04.291 "name": "Nvme$subsystem", 00:36:04.291 "trtype": "$TEST_TRANSPORT", 00:36:04.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.291 "adrfam": "ipv4", 00:36:04.291 "trsvcid": "$NVMF_PORT", 00:36:04.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.291 "hdgst": ${hdgst:-false}, 00:36:04.291 "ddgst": ${ddgst:-false} 00:36:04.291 }, 00:36:04.291 "method": "bdev_nvme_attach_controller" 00:36:04.291 } 00:36:04.291 EOF 00:36:04.291 )") 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.291 { 00:36:04.291 "params": { 00:36:04.291 "name": "Nvme$subsystem", 00:36:04.291 "trtype": "$TEST_TRANSPORT", 00:36:04.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.291 "adrfam": "ipv4", 00:36:04.291 "trsvcid": "$NVMF_PORT", 00:36:04.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.291 "hdgst": ${hdgst:-false}, 00:36:04.291 "ddgst": ${ddgst:-false} 00:36:04.291 }, 00:36:04.291 "method": "bdev_nvme_attach_controller" 00:36:04.291 } 00:36:04.291 EOF 00:36:04.291 )") 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
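Note on the fio launch being traced here: before starting fio, the harness probes the spdk_bdev fio plugin with ldd for sanitizer runtimes (libasan, libclang_rt.asan) and preloads whatever it finds together with the plugin itself. Condensed into a stand-alone sketch, using the workspace paths from this run (they will differ on another checkout):

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      # third ldd column is the resolved library path; empty when the plugin is not linked against it
      lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [ -n "$lib" ] && asan_lib="$asan_lib $lib"
  done
  # preload the sanitizer runtime (if any) plus the spdk_bdev ioengine, then run fio exactly as below
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61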
00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:04.291 "params": { 00:36:04.291 "name": "Nvme0", 00:36:04.291 "trtype": "tcp", 00:36:04.291 "traddr": "10.0.0.2", 00:36:04.291 "adrfam": "ipv4", 00:36:04.291 "trsvcid": "4420", 00:36:04.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.291 "hdgst": false, 00:36:04.291 "ddgst": false 00:36:04.291 }, 00:36:04.291 "method": "bdev_nvme_attach_controller" 00:36:04.291 },{ 00:36:04.291 "params": { 00:36:04.291 "name": "Nvme1", 00:36:04.291 "trtype": "tcp", 00:36:04.291 "traddr": "10.0.0.2", 00:36:04.291 "adrfam": "ipv4", 00:36:04.291 "trsvcid": "4420", 00:36:04.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:04.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:04.291 "hdgst": false, 00:36:04.291 "ddgst": false 00:36:04.291 }, 00:36:04.291 "method": "bdev_nvme_attach_controller" 00:36:04.291 }' 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:04.291 20:41:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.291 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:04.291 ... 00:36:04.291 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:04.291 ... 
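The JSON block printed above is what fio receives on /dev/fd/62: a plain list of bdev_nvme_attach_controller calls pointing at the two subsystems created earlier. To reproduce the same run outside the harness, that fragment can be written into an ordinary SPDK JSON config file and handed to fio directly. A rough sketch follows; the "subsystems"/"bdev" wrapper is the usual shape gen_nvmf_target_json adds around the printed fragment (not shown in the trace), the Nvme0n1 bdev name assumes SPDK's default controller-plus-namespace naming, and only the cnode0 entry is spelled out (cnode1 follows the same pattern):

  cat > /tmp/bdev_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --name=dif0 --thread=1 --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/bdev_nvme.json --filename=Nvme0n1 \
    --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 --time_based=1 --runtime=5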
00:36:04.291 fio-3.35 00:36:04.291 Starting 4 threads 00:36:04.291 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.577 00:36:09.577 filename0: (groupid=0, jobs=1): err= 0: pid=30825: Mon Jul 15 20:41:47 2024 00:36:09.577 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5003msec) 00:36:09.577 slat (nsec): min=4052, max=45078, avg=10869.98, stdev=3427.14 00:36:09.577 clat (usec): min=2284, max=6979, avg=4281.97, stdev=804.83 00:36:09.577 lat (usec): min=2297, max=6987, avg=4292.84, stdev=804.89 00:36:09.577 clat percentiles (usec): 00:36:09.577 | 1.00th=[ 2999], 5.00th=[ 3294], 10.00th=[ 3392], 20.00th=[ 3752], 00:36:09.577 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:36:09.577 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 5932], 95.00th=[ 6259], 00:36:09.577 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 6849], 99.95th=[ 6849], 00:36:09.577 | 99.99th=[ 6980] 00:36:09.577 bw ( KiB/s): min=13904, max=16958, per=25.81%, avg=14851.33, stdev=1063.89, samples=9 00:36:09.577 iops : min= 1738, max= 2119, avg=1856.33, stdev=132.80, samples=9 00:36:09.577 lat (msec) : 4=29.47%, 10=70.53% 00:36:09.577 cpu : usr=94.34%, sys=5.20%, ctx=5, majf=0, minf=122 00:36:09.577 IO depths : 1=0.1%, 2=2.5%, 4=69.0%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.577 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.577 issued rwts: total=9276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.577 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:09.577 filename0: (groupid=0, jobs=1): err= 0: pid=30826: Mon Jul 15 20:41:47 2024 00:36:09.577 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5003msec) 00:36:09.577 slat (nsec): min=3830, max=48196, avg=13064.68, stdev=4490.99 00:36:09.577 clat (usec): min=2127, max=7627, avg=4281.46, stdev=369.36 00:36:09.577 lat (usec): min=2135, max=7639, avg=4294.52, stdev=369.65 00:36:09.577 clat percentiles (usec): 00:36:09.577 | 1.00th=[ 3326], 5.00th=[ 3884], 10.00th=[ 3949], 20.00th=[ 4113], 00:36:09.577 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:09.577 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5014], 00:36:09.577 | 99.00th=[ 5538], 99.50th=[ 5997], 99.90th=[ 6718], 99.95th=[ 6915], 00:36:09.577 | 99.99th=[ 7635] 00:36:09.577 bw ( KiB/s): min=14336, max=15232, per=25.79%, avg=14837.33, stdev=370.51, samples=9 00:36:09.578 iops : min= 1792, max= 1904, avg=1854.67, stdev=46.31, samples=9 00:36:09.578 lat (msec) : 4=14.68%, 10=85.32% 00:36:09.578 cpu : usr=94.02%, sys=5.46%, ctx=8, majf=0, minf=61 00:36:09.578 IO depths : 1=0.1%, 2=0.6%, 4=64.0%, 8=35.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.578 complete : 0=0.0%, 4=98.5%, 8=1.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.578 issued rwts: total=9278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.578 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:09.578 filename1: (groupid=0, jobs=1): err= 0: pid=30827: Mon Jul 15 20:41:47 2024 00:36:09.578 read: IOPS=1728, BW=13.5MiB/s (14.2MB/s)(67.6MiB/5003msec) 00:36:09.578 slat (nsec): min=3881, max=46081, avg=11409.56, stdev=4043.62 00:36:09.578 clat (usec): min=2530, max=10097, avg=4593.06, stdev=826.22 00:36:09.578 lat (usec): min=2538, max=10113, avg=4604.47, stdev=825.33 00:36:09.578 clat percentiles (usec): 00:36:09.578 | 1.00th=[ 3589], 5.00th=[ 3916], 10.00th=[ 4015], 
20.00th=[ 4080], 00:36:09.578 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:09.578 | 70.00th=[ 4424], 80.00th=[ 4883], 90.00th=[ 6259], 95.00th=[ 6456], 00:36:09.578 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7635], 00:36:09.578 | 99.99th=[10159] 00:36:09.578 bw ( KiB/s): min=13365, max=14144, per=24.03%, avg=13824.56, stdev=239.06, samples=9 00:36:09.578 iops : min= 1670, max= 1768, avg=1728.00, stdev=30.03, samples=9 00:36:09.578 lat (msec) : 4=9.98%, 10=90.00%, 20=0.02% 00:36:09.578 cpu : usr=94.14%, sys=5.38%, ctx=7, majf=0, minf=85 00:36:09.578 IO depths : 1=0.1%, 2=0.4%, 4=72.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.578 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.578 issued rwts: total=8647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.578 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:09.578 filename1: (groupid=0, jobs=1): err= 0: pid=30828: Mon Jul 15 20:41:47 2024 00:36:09.578 read: IOPS=1754, BW=13.7MiB/s (14.4MB/s)(68.6MiB/5002msec) 00:36:09.578 slat (nsec): min=3814, max=54272, avg=12863.81, stdev=5005.22 00:36:09.578 clat (usec): min=2439, max=7801, avg=4520.88, stdev=778.08 00:36:09.578 lat (usec): min=2458, max=7813, avg=4533.75, stdev=778.19 00:36:09.578 clat percentiles (usec): 00:36:09.578 | 1.00th=[ 3392], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 4080], 00:36:09.578 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:09.578 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 6063], 95.00th=[ 6390], 00:36:09.578 | 99.00th=[ 6849], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 7767], 00:36:09.578 | 99.99th=[ 7832] 00:36:09.578 bw ( KiB/s): min=13376, max=14368, per=24.39%, avg=14032.00, stdev=319.50, samples=9 00:36:09.578 iops : min= 1672, max= 1796, avg=1754.00, stdev=39.94, samples=9 00:36:09.578 lat (msec) : 4=8.91%, 10=91.09% 00:36:09.578 cpu : usr=91.40%, sys=6.60%, ctx=234, majf=0, minf=66 00:36:09.578 IO depths : 1=0.1%, 2=0.4%, 4=72.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.578 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.578 issued rwts: total=8776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.578 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:09.578 00:36:09.578 Run status group 0 (all jobs): 00:36:09.578 READ: bw=56.2MiB/s (58.9MB/s), 13.5MiB/s-14.5MiB/s (14.2MB/s-15.2MB/s), io=281MiB (295MB), run=5002-5003msec 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.578 
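The READ line in the run status group above is simply the sum of the four per-file averages; a quick check against the per-job numbers printed for filename0/filename1:

  # 14.5 + 14.5 + 13.5 + 13.7 MiB/s from the four job stanzas above
  echo '14.5 + 14.5 + 13.5 + 13.7' | bc   # -> 56.2, matching the 56.2MiB/s group total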
20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.578 00:36:09.578 real 0m24.203s 00:36:09.578 user 4m32.595s 00:36:09.578 sys 0m7.633s 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 ************************************ 00:36:09.578 END TEST fio_dif_rand_params 00:36:09.578 ************************************ 00:36:09.578 20:41:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:09.578 20:41:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:09.578 20:41:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:09.578 20:41:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 ************************************ 00:36:09.578 START TEST fio_dif_digest 00:36:09.578 ************************************ 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- 
target/dif.sh@130 -- # create_subsystems 0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 bdev_null0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.578 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.579 [2024-07-15 20:41:47.893791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:09.579 { 00:36:09.579 "params": { 00:36:09.579 "name": "Nvme$subsystem", 00:36:09.579 "trtype": "$TEST_TRANSPORT", 00:36:09.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.579 "adrfam": "ipv4", 00:36:09.579 "trsvcid": "$NVMF_PORT", 00:36:09.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.579 "hdgst": ${hdgst:-false}, 00:36:09.579 "ddgst": ${ddgst:-false} 00:36:09.579 }, 00:36:09.579 "method": "bdev_nvme_attach_controller" 00:36:09.579 } 00:36:09.579 EOF 00:36:09.579 )") 00:36:09.579 20:41:47 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
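The rpc_cmd calls traced at the start of this test are the autotest wrapper around SPDK's scripts/rpc.py, talking to the target that nvmfappstart launched. Re-creating the same digest target by hand would look roughly like the following; the TCP transport itself is set up once, earlier in dif.sh and outside this excerpt, so the first line (shown with the flags the abort test uses later in this log) is included only for completeness:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420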
00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:09.579 "params": { 00:36:09.579 "name": "Nvme0", 00:36:09.579 "trtype": "tcp", 00:36:09.579 "traddr": "10.0.0.2", 00:36:09.579 "adrfam": "ipv4", 00:36:09.579 "trsvcid": "4420", 00:36:09.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.579 "hdgst": true, 00:36:09.579 "ddgst": true 00:36:09.579 }, 00:36:09.579 "method": "bdev_nvme_attach_controller" 00:36:09.579 }' 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:09.579 20:41:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.837 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:09.837 ... 
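The difference from the rand_params pass is visible in the params above: "hdgst": true and "ddgst": true enable the NVMe/TCP header and data digests (CRC32C over each PDU), so every I/O in this test is checksummed on the wire. The workload itself comes from the dif.sh@127 settings (128k blocks, iodepth 3, 3 jobs, 10 seconds); written out as a stand-alone fio job file this is roughly the following sketch, with the Nvme0n1 bdev name again assumed rather than taken from the trace:

  cat > /tmp/dif_digest.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=10

  [digest]
  filename=Nvme0n1
  EOF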
00:36:09.837 fio-3.35 00:36:09.837 Starting 3 threads 00:36:09.837 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.058 00:36:22.058 filename0: (groupid=0, jobs=1): err= 0: pid=31578: Mon Jul 15 20:41:58 2024 00:36:22.058 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(225MiB/10048msec) 00:36:22.058 slat (nsec): min=4914, max=43998, avg=18660.92, stdev=4689.09 00:36:22.058 clat (usec): min=9911, max=59337, avg=16671.72, stdev=6097.80 00:36:22.058 lat (usec): min=9931, max=59358, avg=16690.38, stdev=6097.91 00:36:22.058 clat percentiles (usec): 00:36:22.058 | 1.00th=[11207], 5.00th=[13698], 10.00th=[14484], 20.00th=[15008], 00:36:22.058 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15795], 60.00th=[16188], 00:36:22.058 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:36:22.058 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57934], 99.95th=[59507], 00:36:22.058 | 99.99th=[59507] 00:36:22.058 bw ( KiB/s): min=19968, max=25088, per=31.81%, avg=23042.15, stdev=1326.77, samples=20 00:36:22.058 iops : min= 156, max= 196, avg=180.00, stdev=10.38, samples=20 00:36:22.058 lat (msec) : 10=0.06%, 20=97.50%, 50=0.22%, 100=2.22% 00:36:22.058 cpu : usr=93.61%, sys=5.57%, ctx=39, majf=0, minf=171 00:36:22.058 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.058 issued rwts: total=1803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:22.058 filename0: (groupid=0, jobs=1): err= 0: pid=31579: Mon Jul 15 20:41:58 2024 00:36:22.058 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(259MiB/10006msec) 00:36:22.058 slat (nsec): min=4807, max=38547, avg=16574.56, stdev=3974.75 00:36:22.058 clat (usec): min=5890, max=58370, avg=14485.96, stdev=3008.80 00:36:22.058 lat (usec): min=5904, max=58384, avg=14502.54, stdev=3009.16 00:36:22.058 clat percentiles (usec): 00:36:22.058 | 1.00th=[ 8291], 5.00th=[ 9765], 10.00th=[11994], 20.00th=[13435], 00:36:22.058 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:36:22.058 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:36:22.058 | 99.00th=[17695], 99.50th=[18482], 99.90th=[57410], 99.95th=[57410], 00:36:22.058 | 99.99th=[58459] 00:36:22.058 bw ( KiB/s): min=23040, max=28160, per=36.52%, avg=26457.60, stdev=1273.99, samples=20 00:36:22.058 iops : min= 180, max= 220, avg=206.70, stdev= 9.95, samples=20 00:36:22.058 lat (msec) : 10=6.19%, 20=93.48%, 50=0.05%, 100=0.29% 00:36:22.058 cpu : usr=94.34%, sys=5.12%, ctx=75, majf=0, minf=180 00:36:22.058 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.058 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:22.058 filename0: (groupid=0, jobs=1): err= 0: pid=31580: Mon Jul 15 20:41:58 2024 00:36:22.058 read: IOPS=180, BW=22.6MiB/s (23.7MB/s)(227MiB/10047msec) 00:36:22.058 slat (nsec): min=4874, max=52416, avg=18512.05, stdev=5786.24 00:36:22.058 clat (usec): min=9246, max=59274, avg=16559.66, stdev=4262.86 00:36:22.058 lat (usec): min=9260, max=59282, avg=16578.17, stdev=4263.13 00:36:22.058 clat percentiles (usec): 00:36:22.058 
| 1.00th=[10552], 5.00th=[11994], 10.00th=[14222], 20.00th=[15139], 00:36:22.058 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16450], 60.00th=[16712], 00:36:22.058 | 70.00th=[17171], 80.00th=[17433], 90.00th=[18220], 95.00th=[19006], 00:36:22.058 | 99.00th=[20317], 99.50th=[56886], 99.90th=[58983], 99.95th=[59507], 00:36:22.058 | 99.99th=[59507] 00:36:22.058 bw ( KiB/s): min=20992, max=24832, per=32.03%, avg=23206.40, stdev=1028.29, samples=20 00:36:22.058 iops : min= 164, max= 194, avg=181.30, stdev= 8.03, samples=20 00:36:22.058 lat (msec) : 10=0.17%, 20=98.51%, 50=0.39%, 100=0.94% 00:36:22.058 cpu : usr=91.41%, sys=6.18%, ctx=629, majf=0, minf=214 00:36:22.058 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.058 issued rwts: total=1815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:22.058 00:36:22.058 Run status group 0 (all jobs): 00:36:22.058 READ: bw=70.7MiB/s (74.2MB/s), 22.4MiB/s-25.8MiB/s (23.5MB/s-27.1MB/s), io=711MiB (745MB), run=10006-10048msec 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.058 20:41:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.058 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.058 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:22.058 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.058 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.058 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.058 00:36:22.058 real 0m11.146s 00:36:22.058 user 0m29.178s 00:36:22.058 sys 0m1.971s 00:36:22.058 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:22.058 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.058 ************************************ 00:36:22.058 END TEST fio_dif_digest 00:36:22.058 ************************************ 00:36:22.058 20:41:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:22.058 20:41:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:22.058 20:41:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:36:22.058 rmmod nvme_tcp 00:36:22.058 rmmod nvme_fabrics 00:36:22.058 rmmod nvme_keyring 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:22.058 20:41:59 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 25539 ']' 00:36:22.059 20:41:59 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 25539 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 25539 ']' 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 25539 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 25539 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 25539' 00:36:22.059 killing process with pid 25539 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@967 -- # kill 25539 00:36:22.059 20:41:59 nvmf_dif -- common/autotest_common.sh@972 -- # wait 25539 00:36:22.059 20:41:59 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:22.059 20:41:59 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:22.059 Waiting for block devices as requested 00:36:22.059 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:22.059 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:22.315 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:22.315 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:22.315 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:22.572 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:22.572 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:22.572 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:22.572 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:22.572 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:22.830 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:22.830 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:22.830 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:23.087 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:23.087 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:23.087 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:23.087 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:23.345 20:42:01 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:23.345 20:42:01 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:23.345 20:42:01 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:23.345 20:42:01 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:23.345 20:42:01 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.345 20:42:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:23.345 20:42:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.248 20:42:03 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:25.248 00:36:25.248 real 1m6.516s 00:36:25.248 user 6m28.658s 00:36:25.248 sys 0m19.149s 00:36:25.248 20:42:03 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:36:25.248 20:42:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.248 ************************************ 00:36:25.248 END TEST nvmf_dif 00:36:25.248 ************************************ 00:36:25.248 20:42:03 -- common/autotest_common.sh@1142 -- # return 0 00:36:25.248 20:42:03 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:25.248 20:42:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:25.248 20:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:25.248 20:42:03 -- common/autotest_common.sh@10 -- # set +x 00:36:25.508 ************************************ 00:36:25.508 START TEST nvmf_abort_qd_sizes 00:36:25.508 ************************************ 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:25.508 * Looking for test storage... 00:36:25.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.508 20:42:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:25.508 20:42:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:27.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:27.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:27.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:27.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
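Both E810 ports (device id 0x159b) have been classified and their net devices found, so nvmf_tcp_init can split them between initiator and target. The commands traced below boil down to moving the target-side port into a private network namespace and addressing the pair so that NVMe/TCP traffic between 10.0.0.1 (initiator, root namespace) and 10.0.0.2 (target, cvl_0_0_ns_spdk) crosses the physical links rather than loopback; condensed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit the NVMe/TCP listener port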
00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:27.412 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:27.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:27.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:36:27.670 00:36:27.670 --- 10.0.0.2 ping statistics --- 00:36:27.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.670 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:27.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:27.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:36:27.670 00:36:27.670 --- 10.0.0.1 ping statistics --- 00:36:27.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.670 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:27.670 20:42:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:28.604 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:28.604 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:28.604 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:28.604 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:28.604 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:28.861 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:28.861 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:28.861 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:28.861 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:29.797 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=36960 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 36960 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 36960 ']' 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:29.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:29.797 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.055 [2024-07-15 20:42:08.330841] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:36:30.055 [2024-07-15 20:42:08.330970] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.055 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.055 [2024-07-15 20:42:08.401376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:30.055 [2024-07-15 20:42:08.497893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.055 [2024-07-15 20:42:08.497945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.055 [2024-07-15 20:42:08.497970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.055 [2024-07-15 20:42:08.497982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.055 [2024-07-15 20:42:08.497992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.055 [2024-07-15 20:42:08.498080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.055 [2024-07-15 20:42:08.498114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.055 [2024-07-15 20:42:08.498240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.055 [2024-07-15 20:42:08.498242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:30.313 20:42:08 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:30.313 20:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.313 ************************************ 00:36:30.313 START TEST spdk_target_abort 00:36:30.313 ************************************ 00:36:30.313 20:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:30.313 20:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:30.313 20:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:30.313 20:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.313 20:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.590 spdk_targetn1 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.590 [2024-07-15 20:42:11.521691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.590 [2024-07-15 20:42:11.553994] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.590 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:33.591 20:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.591 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:36.862 Initializing NVMe Controllers 00:36:36.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:36.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:36.862 Initialization complete. Launching workers. 00:36:36.862 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10572, failed: 0 00:36:36.862 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1297, failed to submit 9275 00:36:36.862 success 835, unsuccess 462, failed 0 00:36:36.862 20:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:36.862 20:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:36.862 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.176 Initializing NVMe Controllers 00:36:40.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.176 Initialization complete. Launching workers. 00:36:40.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8618, failed: 0 00:36:40.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7393 00:36:40.176 success 322, unsuccess 903, failed 0 00:36:40.176 20:42:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.176 20:42:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.176 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.455 Initializing NVMe Controllers 00:36:43.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:43.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:43.455 Initialization complete. Launching workers. 
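Everything from bdev_nvme_attach_controller down to these abort runs is the body of rabort: export the local NVMe drive over NVMe/TCP, then sweep the abort example across queue depths 4, 24 and 64 ("unsuccess" in the result lines is the example's own output label for aborts that did not succeed). A condensed sketch of the same sequence, with the RPC calls and abort arguments copied from the trace (rpc.py defaults to the /var/tmp/spdk.sock socket used above):

  rpc() { sudo ./scripts/rpc.py "$@"; }
  rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # creates bdev spdk_targetn1
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do
      sudo ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done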
00:36:43.455 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31222, failed: 0 00:36:43.455 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2681, failed to submit 28541 00:36:43.455 success 550, unsuccess 2131, failed 0 00:36:43.455 20:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:43.455 20:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.455 20:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.455 20:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.455 20:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:43.455 20:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.455 20:42:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 36960 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 36960 ']' 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 36960 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 36960 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 36960' 00:36:44.390 killing process with pid 36960 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 36960 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 36960 00:36:44.390 00:36:44.390 real 0m14.230s 00:36:44.390 user 0m53.894s 00:36:44.390 sys 0m2.647s 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:44.390 20:42:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.390 ************************************ 00:36:44.390 END TEST spdk_target_abort 00:36:44.390 ************************************ 00:36:44.649 20:42:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:44.649 20:42:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:44.649 20:42:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:44.649 20:42:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:44.649 20:42:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.649 
************************************ 00:36:44.649 START TEST kernel_target_abort 00:36:44.649 ************************************ 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:44.649 20:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:45.585 Waiting for block devices as requested 00:36:45.585 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:45.843 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:45.843 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:46.101 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:46.101 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:46.101 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:46.101 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:46.359 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:46.359 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:46.359 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:46.359 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:46.359 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:46.617 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:46.617 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:46.617 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:46.874 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:46.874 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:46.874 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:46.874 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:46.874 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:46.874 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:46.874 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:46.875 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:46.875 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:46.875 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:46.875 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:47.132 No valid GPT data, bailing 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:47.132 20:42:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:47.132 00:36:47.132 Discovery Log Number of Records 2, Generation counter 2 00:36:47.132 =====Discovery Log Entry 0====== 00:36:47.132 trtype: tcp 00:36:47.132 adrfam: ipv4 00:36:47.132 subtype: current discovery subsystem 00:36:47.132 treq: not specified, sq flow control disable supported 00:36:47.132 portid: 1 00:36:47.132 trsvcid: 4420 00:36:47.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:47.132 traddr: 10.0.0.1 00:36:47.132 eflags: none 00:36:47.132 sectype: none 00:36:47.132 =====Discovery Log Entry 1====== 00:36:47.132 trtype: tcp 00:36:47.132 adrfam: ipv4 00:36:47.132 subtype: nvme subsystem 00:36:47.132 treq: not specified, sq flow control disable supported 00:36:47.132 portid: 1 00:36:47.132 trsvcid: 4420 00:36:47.132 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:47.132 traddr: 10.0.0.1 00:36:47.132 eflags: none 00:36:47.132 sectype: none 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.132 20:42:25 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.132 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:47.133 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.133 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.133 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.133 20:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.133 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.404 Initializing NVMe Controllers 00:36:50.404 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.404 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:50.404 Initialization complete. Launching workers. 00:36:50.404 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28456, failed: 0 00:36:50.404 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28456, failed to submit 0 00:36:50.404 success 0, unsuccess 28456, failed 0 00:36:50.404 20:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:50.404 20:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:50.404 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.692 Initializing NVMe Controllers 00:36:53.692 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:53.692 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:53.692 Initialization complete. Launching workers. 
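The kernel_target_abort case swaps the SPDK target for the in-kernel nvmet/tcp target: configure_kernel_target exports /dev/nvme0n1 through configfs, the nvme discover output above confirms the discovery subsystem plus testnqn on 10.0.0.1:4420, and the same qd 4/24/64 abort sweep is rerun against it. The trace only shows the values being echoed, so the sketch below fills in the standard nvmet configfs attribute paths for readability (they are not spelled out in the trace itself), together with the clean_kernel_target teardown that runs afterwards:

  # setup (configure_kernel_target)
  modprobe nvmet
  modprobe nvmet-tcp
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420       # expect two log entries: discovery + testnqn
  # teardown (clean_kernel_target), child-first so the rmdir calls succeed
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir  "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet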
00:36:53.692 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58202, failed: 0 00:36:53.692 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14654, failed to submit 43548 00:36:53.692 success 0, unsuccess 14654, failed 0 00:36:53.692 20:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:53.692 20:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.692 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.965 Initializing NVMe Controllers 00:36:56.965 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:56.965 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:56.965 Initialization complete. Launching workers. 00:36:56.965 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57297, failed: 0 00:36:56.966 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14294, failed to submit 43003 00:36:56.966 success 0, unsuccess 14294, failed 0 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:56.966 20:42:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:57.530 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:57.530 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:57.530 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:57.530 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:57.530 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:57.530 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:57.530 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:57.789 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:57.789 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:57.789 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:57.789 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:57.789 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:57.789 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:57.789 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:57.789 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:57.789 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:58.723 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:58.723 00:36:58.723 real 0m14.216s 00:36:58.723 user 0m4.705s 00:36:58.723 sys 0m3.396s 00:36:58.723 20:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:58.723 20:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.723 ************************************ 00:36:58.723 END TEST kernel_target_abort 00:36:58.723 ************************************ 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:58.723 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:58.723 rmmod nvme_tcp 00:36:58.723 rmmod nvme_fabrics 00:36:58.723 rmmod nvme_keyring 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 36960 ']' 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 36960 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 36960 ']' 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 36960 00:36:59.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (36960) - No such process 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 36960 is not found' 00:36:59.038 Process with pid 36960 is not found 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:59.038 20:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:59.972 Waiting for block devices as requested 00:36:59.972 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:00.232 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:00.232 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:00.232 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:00.232 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:00.490 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:00.490 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:00.490 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:00.490 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:00.749 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:00.749 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:00.749 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:01.007 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:01.007 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:37:01.007 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:01.007 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:01.007 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:01.266 20:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:01.266 20:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:01.266 20:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:01.266 20:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:01.266 20:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.266 20:42:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:01.266 20:42:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.170 20:42:41 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:03.170 00:37:03.170 real 0m37.888s 00:37:03.170 user 1m0.639s 00:37:03.170 sys 0m9.477s 00:37:03.170 20:42:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:03.170 20:42:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:03.170 ************************************ 00:37:03.170 END TEST nvmf_abort_qd_sizes 00:37:03.170 ************************************ 00:37:03.429 20:42:41 -- common/autotest_common.sh@1142 -- # return 0 00:37:03.429 20:42:41 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:03.429 20:42:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:03.429 20:42:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:03.429 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:37:03.429 ************************************ 00:37:03.429 START TEST keyring_file 00:37:03.429 ************************************ 00:37:03.429 20:42:41 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:03.429 * Looking for test storage... 
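With both abort variants done, nvmftestfini unwinds the environment: the initiator modules are removed (the rmmod lines above), killprocess finds the target already gone (hence "No such process"), setup.sh reset hands the devices back to the kernel drivers, and the test namespace plus its peer interface are flushed. A rough stand-alone equivalent (remove_spdk_ns and the interface names are specific to this harness; the plain ip commands below are an illustrative substitute):

  modprobe -r nvme-tcp nvme-fabrics
  sudo ./scripts/setup.sh reset                    # vfio-pci -> nvme/ioatdma, as traced above
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1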
00:37:03.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:03.429 20:42:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:03.429 20:42:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.429 20:42:41 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.429 20:42:41 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.429 20:42:41 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.429 20:42:41 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.429 20:42:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.430 20:42:41 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.430 20:42:41 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.430 20:42:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:03.430 20:42:41 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ypG07MZOnu 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:03.430 20:42:41 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ypG07MZOnu 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ypG07MZOnu 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ypG07MZOnu 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IN6zJiM8d0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:03.430 20:42:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IN6zJiM8d0 00:37:03.430 20:42:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IN6zJiM8d0 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.IN6zJiM8d0 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=42857 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:03.430 20:42:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 42857 00:37:03.430 20:42:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 42857 ']' 00:37:03.430 20:42:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.430 20:42:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:03.430 20:42:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.430 20:42:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:03.430 20:42:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:03.430 [2024-07-15 20:42:41.926300] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
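prep_key (test/keyring/common.sh, traced above) converts each raw hex key into the NVMeTLSkey-1 interchange string via the python one-liner, writes it to a mktemp file, tightens the mode to 0600 and echoes the path back to file.sh; a looser mode is rejected by the keyring, which the chmod 0660 negative test at the end of this section exercises. A sketch of reusing the helper rather than re-deriving the interchange encoding by hand (run from the spdk repository root; sourcing the helper outside the harness is an assumption):

  source test/keyring/common.sh
  key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)   # e.g. /tmp/tmp.XXXXXXXXXX, mode 0600
  key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)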
00:37:03.430 [2024-07-15 20:42:41.926396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42857 ] 00:37:03.430 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.688 [2024-07-15 20:42:41.985512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.688 [2024-07-15 20:42:42.074769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.946 20:42:42 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:03.946 20:42:42 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:03.946 20:42:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:03.946 20:42:42 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.946 20:42:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:03.946 [2024-07-15 20:42:42.326765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.946 null0 00:37:03.946 [2024-07-15 20:42:42.358817] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:03.946 [2024-07-15 20:42:42.359301] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:03.946 [2024-07-15 20:42:42.366826] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:03.946 20:42:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.946 20:42:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:03.946 20:42:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:03.946 20:42:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:03.947 [2024-07-15 20:42:42.378843] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:03.947 request: 00:37:03.947 { 00:37:03.947 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:03.947 "secure_channel": false, 00:37:03.947 "listen_address": { 00:37:03.947 "trtype": "tcp", 00:37:03.947 "traddr": "127.0.0.1", 00:37:03.947 "trsvcid": "4420" 00:37:03.947 }, 00:37:03.947 "method": "nvmf_subsystem_add_listener", 00:37:03.947 "req_id": 1 00:37:03.947 } 00:37:03.947 Got JSON-RPC error response 00:37:03.947 response: 00:37:03.947 { 00:37:03.947 "code": -32602, 00:37:03.947 "message": "Invalid parameters" 00:37:03.947 } 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 
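file.sh@43 above is a negative test: with the TLS listener already configured on 127.0.0.1:4420, adding the same listener again must fail, and the harness's NOT wrapper asserts exactly that via the "Listener already exists" JSON-RPC error. Outside the harness the same check reduces to expecting a non-zero exit (a sketch against the default /var/tmp/spdk.sock target socket):

  if sudo ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
          nqn.2016-06.io.spdk:cnode0; then
      echo "unexpected: duplicate listener add succeeded" >&2
      exit 1
  fi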
00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:03.947 20:42:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=42862 00:37:03.947 20:42:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 42862 /var/tmp/bperf.sock 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 42862 ']' 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:03.947 20:42:42 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:03.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:03.947 20:42:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:03.947 [2024-07-15 20:42:42.428396] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 00:37:03.947 [2024-07-15 20:42:42.428481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42862 ] 00:37:03.947 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.205 [2024-07-15 20:42:42.486568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.205 [2024-07-15 20:42:42.572692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.205 20:42:42 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:04.205 20:42:42 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:04.205 20:42:42 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:04.205 20:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:04.462 20:42:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IN6zJiM8d0 00:37:04.462 20:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IN6zJiM8d0 00:37:04.719 20:42:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:04.719 20:42:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:04.719 20:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.719 20:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.719 20:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:04.976 20:42:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ypG07MZOnu == \/\t\m\p\/\t\m\p\.\y\p\G\0\7\M\Z\O\n\u ]] 00:37:04.976 20:42:43 keyring_file -- keyring/file.sh@52 -- # 
get_key key1 00:37:04.976 20:42:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:04.976 20:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.976 20:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.976 20:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:05.233 20:42:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.IN6zJiM8d0 == \/\t\m\p\/\t\m\p\.\I\N\6\z\J\i\M\8\d\0 ]] 00:37:05.233 20:42:43 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:05.233 20:42:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:05.233 20:42:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.233 20:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.233 20:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.233 20:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:05.491 20:42:43 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:05.491 20:42:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:05.491 20:42:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:05.491 20:42:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.491 20:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.491 20:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.491 20:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:05.748 20:42:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:05.748 20:42:44 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:05.748 20:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.005 [2024-07-15 20:42:44.413342] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:06.005 nvme0n1 00:37:06.005 20:42:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:06.005 20:42:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:06.005 20:42:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.005 20:42:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.005 20:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.005 20:42:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.262 20:42:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:06.262 20:42:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:06.262 20:42:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:06.262 20:42:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.262 20:42:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:37:06.262 20:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.262 20:42:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:06.519 20:42:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:06.519 20:42:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:06.776 Running I/O for 1 seconds... 00:37:07.709 00:37:07.709 Latency(us) 00:37:07.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.709 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:07.709 nvme0n1 : 1.03 4167.75 16.28 0.00 0.00 30275.32 4126.34 37865.24 00:37:07.709 =================================================================================================================== 00:37:07.709 Total : 4167.75 16.28 0.00 0.00 30275.32 4126.34 37865.24 00:37:07.709 0 00:37:07.709 20:42:46 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:07.709 20:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:07.967 20:42:46 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:07.967 20:42:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:07.967 20:42:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.967 20:42:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.967 20:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.967 20:42:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.225 20:42:46 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:08.225 20:42:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:08.225 20:42:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:08.225 20:42:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.225 20:42:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.225 20:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.225 20:42:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:08.483 20:42:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:08.483 20:42:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:08.483 20:42:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:08.483 20:42:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:08.483 20:42:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:08.483 20:42:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.483 20:42:46 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 
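The initiator side of this test all runs over bdevperf's own RPC socket: register both key files with the file-based keyring, attach a TCP controller that references key0 by name, drive I/O for one second via bdevperf.py, then detach. Condensed from the trace (the harness waits for the bperf socket with waitforlisten before issuing RPCs):

  sudo ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
  bperf() { sudo ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf keyring_file_add_key key0 "$key0path"
  bperf keyring_file_add_key key1 "$key1path"
  bperf bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  bperf keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 2 while nvme0 holds the key
  sudo ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  bperf bdev_nvme_detach_controller nvme0                                    # refcnt drops back to 1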
00:37:08.483 20:42:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.483 20:42:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:08.483 20:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:08.740 [2024-07-15 20:42:47.151282] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:08.740 [2024-07-15 20:42:47.151843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x763710 (107): Transport endpoint is not connected 00:37:08.740 [2024-07-15 20:42:47.152832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x763710 (9): Bad file descriptor 00:37:08.740 [2024-07-15 20:42:47.153830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:08.741 [2024-07-15 20:42:47.153855] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:08.741 [2024-07-15 20:42:47.153885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:08.741 request: 00:37:08.741 { 00:37:08.741 "name": "nvme0", 00:37:08.741 "trtype": "tcp", 00:37:08.741 "traddr": "127.0.0.1", 00:37:08.741 "adrfam": "ipv4", 00:37:08.741 "trsvcid": "4420", 00:37:08.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:08.741 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:08.741 "prchk_reftag": false, 00:37:08.741 "prchk_guard": false, 00:37:08.741 "hdgst": false, 00:37:08.741 "ddgst": false, 00:37:08.741 "psk": "key1", 00:37:08.741 "method": "bdev_nvme_attach_controller", 00:37:08.741 "req_id": 1 00:37:08.741 } 00:37:08.741 Got JSON-RPC error response 00:37:08.741 response: 00:37:08.741 { 00:37:08.741 "code": -5, 00:37:08.741 "message": "Input/output error" 00:37:08.741 } 00:37:08.741 20:42:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:08.741 20:42:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:08.741 20:42:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:08.741 20:42:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:08.741 20:42:47 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:08.741 20:42:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:08.741 20:42:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.741 20:42:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.741 20:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.741 20:42:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.998 20:42:47 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:08.998 20:42:47 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:08.998 20:42:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:08.998 20:42:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 
00:37:08.998 20:42:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.998 20:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.998 20:42:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:09.256 20:42:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:09.256 20:42:47 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:09.256 20:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:09.514 20:42:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:09.514 20:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:09.771 20:42:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:09.771 20:42:48 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:09.771 20:42:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.029 20:42:48 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:10.029 20:42:48 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ypG07MZOnu 00:37:10.029 20:42:48 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:10.029 20:42:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:10.029 20:42:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:10.029 20:42:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:10.029 20:42:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.029 20:42:48 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:10.029 20:42:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.029 20:42:48 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:10.029 20:42:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:10.287 [2024-07-15 20:42:48.653728] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ypG07MZOnu': 0100660 00:37:10.287 [2024-07-15 20:42:48.653771] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:10.287 request: 00:37:10.287 { 00:37:10.287 "name": "key0", 00:37:10.287 "path": "/tmp/tmp.ypG07MZOnu", 00:37:10.287 "method": "keyring_file_add_key", 00:37:10.287 "req_id": 1 00:37:10.287 } 00:37:10.287 Got JSON-RPC error response 00:37:10.287 response: 00:37:10.287 { 00:37:10.287 "code": -1, 00:37:10.287 "message": "Operation not permitted" 00:37:10.287 } 00:37:10.287 20:42:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:10.287 20:42:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:10.287 20:42:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:10.287 20:42:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:10.287 
20:42:48 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ypG07MZOnu 00:37:10.287 20:42:48 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:10.287 20:42:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ypG07MZOnu 00:37:10.544 20:42:48 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ypG07MZOnu 00:37:10.544 20:42:48 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:10.544 20:42:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.544 20:42:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.544 20:42:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.544 20:42:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.544 20:42:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.801 20:42:49 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:10.801 20:42:49 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.801 20:42:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:10.801 20:42:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.801 20:42:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:10.801 20:42:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.801 20:42:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:10.801 20:42:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.801 20:42:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.801 20:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.059 [2024-07-15 20:42:49.395768] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ypG07MZOnu': No such file or directory 00:37:11.059 [2024-07-15 20:42:49.395807] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:11.059 [2024-07-15 20:42:49.395848] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:11.059 [2024-07-15 20:42:49.395861] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:11.059 [2024-07-15 20:42:49.395874] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:11.059 request: 00:37:11.059 { 00:37:11.059 "name": "nvme0", 00:37:11.059 "trtype": "tcp", 00:37:11.059 "traddr": "127.0.0.1", 00:37:11.059 "adrfam": "ipv4", 00:37:11.059 "trsvcid": "4420", 00:37:11.059 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:37:11.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.059 "prchk_reftag": false, 00:37:11.059 "prchk_guard": false, 00:37:11.059 "hdgst": false, 00:37:11.059 "ddgst": false, 00:37:11.059 "psk": "key0", 00:37:11.059 "method": "bdev_nvme_attach_controller", 00:37:11.059 "req_id": 1 00:37:11.059 } 00:37:11.059 Got JSON-RPC error response 00:37:11.059 response: 00:37:11.059 { 00:37:11.059 "code": -19, 00:37:11.059 "message": "No such device" 00:37:11.059 } 00:37:11.059 20:42:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:11.059 20:42:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:11.059 20:42:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:11.059 20:42:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:11.059 20:42:49 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:11.059 20:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:11.316 20:42:49 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lHT3e8MNLl 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:11.316 20:42:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:11.316 20:42:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:11.316 20:42:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:11.316 20:42:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:11.316 20:42:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:11.316 20:42:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lHT3e8MNLl 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lHT3e8MNLl 00:37:11.316 20:42:49 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.lHT3e8MNLl 00:37:11.316 20:42:49 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHT3e8MNLl 00:37:11.316 20:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lHT3e8MNLl 00:37:11.574 20:42:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.574 20:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.832 nvme0n1 00:37:11.832 20:42:50 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:37:11.832 20:42:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:11.832 20:42:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.832 20:42:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.832 20:42:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.832 20:42:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.089 20:42:50 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:12.089 20:42:50 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:12.089 20:42:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:12.377 20:42:50 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:12.378 20:42:50 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:12.378 20:42:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.378 20:42:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.378 20:42:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.640 20:42:51 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:12.640 20:42:51 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:12.640 20:42:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.640 20:42:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.640 20:42:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.640 20:42:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.640 20:42:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.918 20:42:51 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:12.919 20:42:51 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:12.919 20:42:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:13.175 20:42:51 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:13.175 20:42:51 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:13.175 20:42:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.431 20:42:51 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:13.431 20:42:51 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHT3e8MNLl 00:37:13.431 20:42:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lHT3e8MNLl 00:37:13.688 20:42:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IN6zJiM8d0 00:37:13.688 20:42:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IN6zJiM8d0 00:37:13.945 20:42:52 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.945 20:42:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.202 nvme0n1 00:37:14.202 20:42:52 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:14.202 20:42:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:14.460 20:42:52 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:14.460 "subsystems": [ 00:37:14.460 { 00:37:14.460 "subsystem": "keyring", 00:37:14.460 "config": [ 00:37:14.460 { 00:37:14.460 "method": "keyring_file_add_key", 00:37:14.460 "params": { 00:37:14.460 "name": "key0", 00:37:14.460 "path": "/tmp/tmp.lHT3e8MNLl" 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "keyring_file_add_key", 00:37:14.460 "params": { 00:37:14.460 "name": "key1", 00:37:14.460 "path": "/tmp/tmp.IN6zJiM8d0" 00:37:14.460 } 00:37:14.460 } 00:37:14.460 ] 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "subsystem": "iobuf", 00:37:14.460 "config": [ 00:37:14.460 { 00:37:14.460 "method": "iobuf_set_options", 00:37:14.460 "params": { 00:37:14.460 "small_pool_count": 8192, 00:37:14.460 "large_pool_count": 1024, 00:37:14.460 "small_bufsize": 8192, 00:37:14.460 "large_bufsize": 135168 00:37:14.460 } 00:37:14.460 } 00:37:14.460 ] 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "subsystem": "sock", 00:37:14.460 "config": [ 00:37:14.460 { 00:37:14.460 "method": "sock_set_default_impl", 00:37:14.460 "params": { 00:37:14.460 "impl_name": "posix" 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "sock_impl_set_options", 00:37:14.460 "params": { 00:37:14.460 "impl_name": "ssl", 00:37:14.460 "recv_buf_size": 4096, 00:37:14.460 "send_buf_size": 4096, 00:37:14.460 "enable_recv_pipe": true, 00:37:14.460 "enable_quickack": false, 00:37:14.460 "enable_placement_id": 0, 00:37:14.460 "enable_zerocopy_send_server": true, 00:37:14.460 "enable_zerocopy_send_client": false, 00:37:14.460 "zerocopy_threshold": 0, 00:37:14.460 "tls_version": 0, 00:37:14.460 "enable_ktls": false 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "sock_impl_set_options", 00:37:14.460 "params": { 00:37:14.460 "impl_name": "posix", 00:37:14.460 "recv_buf_size": 2097152, 00:37:14.460 "send_buf_size": 2097152, 00:37:14.460 "enable_recv_pipe": true, 00:37:14.460 "enable_quickack": false, 00:37:14.460 "enable_placement_id": 0, 00:37:14.460 "enable_zerocopy_send_server": true, 00:37:14.460 "enable_zerocopy_send_client": false, 00:37:14.460 "zerocopy_threshold": 0, 00:37:14.460 "tls_version": 0, 00:37:14.460 "enable_ktls": false 00:37:14.460 } 00:37:14.460 } 00:37:14.460 ] 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "subsystem": "vmd", 00:37:14.460 "config": [] 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "subsystem": "accel", 00:37:14.460 "config": [ 00:37:14.460 { 00:37:14.460 "method": "accel_set_options", 00:37:14.460 "params": { 00:37:14.460 "small_cache_size": 128, 00:37:14.460 "large_cache_size": 16, 00:37:14.460 "task_count": 2048, 00:37:14.460 "sequence_count": 2048, 00:37:14.460 "buf_count": 2048 00:37:14.460 } 00:37:14.460 } 00:37:14.460 ] 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 
"subsystem": "bdev", 00:37:14.460 "config": [ 00:37:14.460 { 00:37:14.460 "method": "bdev_set_options", 00:37:14.460 "params": { 00:37:14.460 "bdev_io_pool_size": 65535, 00:37:14.460 "bdev_io_cache_size": 256, 00:37:14.460 "bdev_auto_examine": true, 00:37:14.460 "iobuf_small_cache_size": 128, 00:37:14.460 "iobuf_large_cache_size": 16 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "bdev_raid_set_options", 00:37:14.460 "params": { 00:37:14.460 "process_window_size_kb": 1024 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "bdev_iscsi_set_options", 00:37:14.460 "params": { 00:37:14.460 "timeout_sec": 30 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "bdev_nvme_set_options", 00:37:14.460 "params": { 00:37:14.460 "action_on_timeout": "none", 00:37:14.460 "timeout_us": 0, 00:37:14.460 "timeout_admin_us": 0, 00:37:14.460 "keep_alive_timeout_ms": 10000, 00:37:14.460 "arbitration_burst": 0, 00:37:14.460 "low_priority_weight": 0, 00:37:14.460 "medium_priority_weight": 0, 00:37:14.460 "high_priority_weight": 0, 00:37:14.460 "nvme_adminq_poll_period_us": 10000, 00:37:14.460 "nvme_ioq_poll_period_us": 0, 00:37:14.460 "io_queue_requests": 512, 00:37:14.460 "delay_cmd_submit": true, 00:37:14.460 "transport_retry_count": 4, 00:37:14.460 "bdev_retry_count": 3, 00:37:14.460 "transport_ack_timeout": 0, 00:37:14.460 "ctrlr_loss_timeout_sec": 0, 00:37:14.460 "reconnect_delay_sec": 0, 00:37:14.460 "fast_io_fail_timeout_sec": 0, 00:37:14.460 "disable_auto_failback": false, 00:37:14.460 "generate_uuids": false, 00:37:14.460 "transport_tos": 0, 00:37:14.460 "nvme_error_stat": false, 00:37:14.460 "rdma_srq_size": 0, 00:37:14.460 "io_path_stat": false, 00:37:14.460 "allow_accel_sequence": false, 00:37:14.460 "rdma_max_cq_size": 0, 00:37:14.460 "rdma_cm_event_timeout_ms": 0, 00:37:14.460 "dhchap_digests": [ 00:37:14.460 "sha256", 00:37:14.460 "sha384", 00:37:14.460 "sha512" 00:37:14.460 ], 00:37:14.460 "dhchap_dhgroups": [ 00:37:14.460 "null", 00:37:14.460 "ffdhe2048", 00:37:14.460 "ffdhe3072", 00:37:14.460 "ffdhe4096", 00:37:14.460 "ffdhe6144", 00:37:14.460 "ffdhe8192" 00:37:14.460 ] 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "bdev_nvme_attach_controller", 00:37:14.460 "params": { 00:37:14.460 "name": "nvme0", 00:37:14.460 "trtype": "TCP", 00:37:14.460 "adrfam": "IPv4", 00:37:14.460 "traddr": "127.0.0.1", 00:37:14.460 "trsvcid": "4420", 00:37:14.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.460 "prchk_reftag": false, 00:37:14.460 "prchk_guard": false, 00:37:14.460 "ctrlr_loss_timeout_sec": 0, 00:37:14.460 "reconnect_delay_sec": 0, 00:37:14.460 "fast_io_fail_timeout_sec": 0, 00:37:14.460 "psk": "key0", 00:37:14.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:14.460 "hdgst": false, 00:37:14.460 "ddgst": false 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "bdev_nvme_set_hotplug", 00:37:14.460 "params": { 00:37:14.460 "period_us": 100000, 00:37:14.460 "enable": false 00:37:14.460 } 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "method": "bdev_wait_for_examine" 00:37:14.460 } 00:37:14.460 ] 00:37:14.460 }, 00:37:14.460 { 00:37:14.460 "subsystem": "nbd", 00:37:14.460 "config": [] 00:37:14.460 } 00:37:14.460 ] 00:37:14.460 }' 00:37:14.460 20:42:52 keyring_file -- keyring/file.sh@114 -- # killprocess 42862 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 42862 ']' 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@952 -- # kill -0 42862 00:37:14.460 20:42:52 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 42862 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 42862' 00:37:14.460 killing process with pid 42862 00:37:14.460 20:42:52 keyring_file -- common/autotest_common.sh@967 -- # kill 42862 00:37:14.460 Received shutdown signal, test time was about 1.000000 seconds 00:37:14.460 00:37:14.460 Latency(us) 00:37:14.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.460 =================================================================================================================== 00:37:14.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:14.461 20:42:52 keyring_file -- common/autotest_common.sh@972 -- # wait 42862 00:37:14.719 20:42:53 keyring_file -- keyring/file.sh@117 -- # bperfpid=44318 00:37:14.719 20:42:53 keyring_file -- keyring/file.sh@119 -- # waitforlisten 44318 /var/tmp/bperf.sock 00:37:14.719 20:42:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 44318 ']' 00:37:14.719 20:42:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:14.719 20:42:53 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:14.719 20:42:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:14.719 20:42:53 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:14.719 "subsystems": [ 00:37:14.719 { 00:37:14.719 "subsystem": "keyring", 00:37:14.719 "config": [ 00:37:14.719 { 00:37:14.719 "method": "keyring_file_add_key", 00:37:14.719 "params": { 00:37:14.719 "name": "key0", 00:37:14.719 "path": "/tmp/tmp.lHT3e8MNLl" 00:37:14.719 } 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "method": "keyring_file_add_key", 00:37:14.719 "params": { 00:37:14.719 "name": "key1", 00:37:14.719 "path": "/tmp/tmp.IN6zJiM8d0" 00:37:14.719 } 00:37:14.719 } 00:37:14.719 ] 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "subsystem": "iobuf", 00:37:14.719 "config": [ 00:37:14.719 { 00:37:14.719 "method": "iobuf_set_options", 00:37:14.719 "params": { 00:37:14.719 "small_pool_count": 8192, 00:37:14.719 "large_pool_count": 1024, 00:37:14.719 "small_bufsize": 8192, 00:37:14.719 "large_bufsize": 135168 00:37:14.719 } 00:37:14.719 } 00:37:14.719 ] 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "subsystem": "sock", 00:37:14.719 "config": [ 00:37:14.719 { 00:37:14.719 "method": "sock_set_default_impl", 00:37:14.719 "params": { 00:37:14.719 "impl_name": "posix" 00:37:14.719 } 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "method": "sock_impl_set_options", 00:37:14.719 "params": { 00:37:14.719 "impl_name": "ssl", 00:37:14.719 "recv_buf_size": 4096, 00:37:14.719 "send_buf_size": 4096, 00:37:14.719 "enable_recv_pipe": true, 00:37:14.719 "enable_quickack": false, 00:37:14.719 "enable_placement_id": 0, 00:37:14.719 "enable_zerocopy_send_server": true, 00:37:14.719 "enable_zerocopy_send_client": false, 00:37:14.719 "zerocopy_threshold": 0, 00:37:14.719 "tls_version": 0, 00:37:14.719 "enable_ktls": false 
00:37:14.719 } 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "method": "sock_impl_set_options", 00:37:14.719 "params": { 00:37:14.719 "impl_name": "posix", 00:37:14.719 "recv_buf_size": 2097152, 00:37:14.719 "send_buf_size": 2097152, 00:37:14.719 "enable_recv_pipe": true, 00:37:14.719 "enable_quickack": false, 00:37:14.719 "enable_placement_id": 0, 00:37:14.719 "enable_zerocopy_send_server": true, 00:37:14.719 "enable_zerocopy_send_client": false, 00:37:14.719 "zerocopy_threshold": 0, 00:37:14.719 "tls_version": 0, 00:37:14.719 "enable_ktls": false 00:37:14.719 } 00:37:14.719 } 00:37:14.719 ] 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "subsystem": "vmd", 00:37:14.719 "config": [] 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "subsystem": "accel", 00:37:14.719 "config": [ 00:37:14.719 { 00:37:14.719 "method": "accel_set_options", 00:37:14.719 "params": { 00:37:14.719 "small_cache_size": 128, 00:37:14.719 "large_cache_size": 16, 00:37:14.719 "task_count": 2048, 00:37:14.719 "sequence_count": 2048, 00:37:14.719 "buf_count": 2048 00:37:14.719 } 00:37:14.719 } 00:37:14.719 ] 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "subsystem": "bdev", 00:37:14.719 "config": [ 00:37:14.719 { 00:37:14.719 "method": "bdev_set_options", 00:37:14.719 "params": { 00:37:14.719 "bdev_io_pool_size": 65535, 00:37:14.719 "bdev_io_cache_size": 256, 00:37:14.719 "bdev_auto_examine": true, 00:37:14.719 "iobuf_small_cache_size": 128, 00:37:14.719 "iobuf_large_cache_size": 16 00:37:14.719 } 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "method": "bdev_raid_set_options", 00:37:14.719 "params": { 00:37:14.719 "process_window_size_kb": 1024 00:37:14.719 } 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "method": "bdev_iscsi_set_options", 00:37:14.719 "params": { 00:37:14.719 "timeout_sec": 30 00:37:14.719 } 00:37:14.719 }, 00:37:14.719 { 00:37:14.719 "method": "bdev_nvme_set_options", 00:37:14.719 "params": { 00:37:14.719 "action_on_timeout": "none", 00:37:14.719 "timeout_us": 0, 00:37:14.719 "timeout_admin_us": 0, 00:37:14.719 "keep_alive_timeout_ms": 10000, 00:37:14.719 "arbitration_burst": 0, 00:37:14.719 "low_priority_weight": 0, 00:37:14.719 "medium_priority_weight": 0, 00:37:14.719 "high_priority_weight": 0, 00:37:14.719 "nvme_adminq_poll_period_us": 10000, 00:37:14.719 "nvme_ioq_poll_period_us": 0, 00:37:14.719 "io_queue_requests": 512, 00:37:14.719 "delay_cmd_submit": true, 00:37:14.719 "transport_retry_count": 4, 00:37:14.719 "bdev_retry_count": 3, 00:37:14.719 "transport_ack_timeout": 0, 00:37:14.719 "ctrlr_loss_timeout_sec": 0, 00:37:14.719 "reconnect_delay_sec": 0, 00:37:14.719 "fast_io_fail_timeout_sec": 0, 00:37:14.719 "disable_auto_failback": false, 00:37:14.720 "generate_uuids": false, 00:37:14.720 "transport_tos": 0, 00:37:14.720 "nvme_error_stat": false, 00:37:14.720 "rdma_srq_size": 0, 00:37:14.720 "io_path_stat": false, 00:37:14.720 "allow_accel_sequence": false, 00:37:14.720 20:42:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:14.720 "rdma_max_cq_size": 0, 00:37:14.720 "rdma_cm_event_timeout_ms": 0, 00:37:14.720 "dhchap_digests": [ 00:37:14.720 "sha256", 00:37:14.720 "sha384", 00:37:14.720 "sha512" 00:37:14.720 ], 00:37:14.720 "dhchap_dhgroups": [ 00:37:14.720 "null", 00:37:14.720 "ffdhe2048", 00:37:14.720 "ffdhe3072", 00:37:14.720 "ffdhe4096", 00:37:14.720 "ffdhe6144", 00:37:14.720 "ffdhe8192" 00:37:14.720 ] 00:37:14.720 } 00:37:14.720 }, 00:37:14.720 { 00:37:14.720 "method": "bdev_nvme_attach_controller", 00:37:14.720 "params": { 00:37:14.720 "name": "nvme0", 00:37:14.720 "trtype": "TCP", 00:37:14.720 "adrfam": "IPv4", 00:37:14.720 "traddr": "127.0.0.1", 00:37:14.720 "trsvcid": "4420", 00:37:14.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.720 "prchk_reftag": false, 00:37:14.720 "prchk_guard": false, 00:37:14.720 "ctrlr_loss_timeout_sec": 0, 00:37:14.720 "reconnect_delay_sec": 0, 00:37:14.720 "fast_io_fail_timeout_sec": 0, 00:37:14.720 "psk": "key0", 00:37:14.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:14.720 "hdgst": false, 00:37:14.720 "ddgst": false 00:37:14.720 } 00:37:14.720 }, 00:37:14.720 { 00:37:14.720 "method": "bdev_nvme_set_hotplug", 00:37:14.720 "params": { 00:37:14.720 "period_us": 100000, 00:37:14.720 "enable": false 00:37:14.720 } 00:37:14.720 }, 00:37:14.720 { 00:37:14.720 "method": "bdev_wait_for_examine" 00:37:14.720 } 00:37:14.720 ] 00:37:14.720 }, 00:37:14.720 { 00:37:14.720 "subsystem": "nbd", 00:37:14.720 "config": [] 00:37:14.720 } 00:37:14.720 ] 00:37:14.720 }' 00:37:14.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:14.720 20:42:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:14.720 20:42:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:14.720 [2024-07-15 20:42:53.207578] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:37:14.720 [2024-07-15 20:42:53.207676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44318 ] 00:37:14.720 EAL: No free 2048 kB hugepages reported on node 1 00:37:14.977 [2024-07-15 20:42:53.267806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.977 [2024-07-15 20:42:53.354462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:15.234 [2024-07-15 20:42:53.540383] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:15.799 20:42:54 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:15.799 20:42:54 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:15.799 20:42:54 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:15.799 20:42:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.799 20:42:54 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:16.057 20:42:54 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:16.057 20:42:54 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:16.057 20:42:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.057 20:42:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.057 20:42:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.057 20:42:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.057 20:42:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.315 20:42:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:16.315 20:42:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:16.315 20:42:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:16.315 20:42:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.315 20:42:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.315 20:42:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.315 20:42:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:16.573 20:42:54 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:16.573 20:42:54 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:16.573 20:42:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:16.573 20:42:54 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:16.830 20:42:55 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:16.830 20:42:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:16.830 20:42:55 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lHT3e8MNLl /tmp/tmp.IN6zJiM8d0 00:37:16.830 20:42:55 keyring_file -- keyring/file.sh@20 -- # killprocess 44318 00:37:16.830 20:42:55 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 44318 ']' 00:37:16.830 20:42:55 keyring_file -- common/autotest_common.sh@952 -- # kill -0 44318 00:37:16.830 20:42:55 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:37:16.830 20:42:55 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:16.830 20:42:55 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 44318 00:37:16.831 20:42:55 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:16.831 20:42:55 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:16.831 20:42:55 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 44318' 00:37:16.831 killing process with pid 44318 00:37:16.831 20:42:55 keyring_file -- common/autotest_common.sh@967 -- # kill 44318 00:37:16.831 Received shutdown signal, test time was about 1.000000 seconds 00:37:16.831 00:37:16.831 Latency(us) 00:37:16.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.831 =================================================================================================================== 00:37:16.831 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:16.831 20:42:55 keyring_file -- common/autotest_common.sh@972 -- # wait 44318 00:37:17.088 20:42:55 keyring_file -- keyring/file.sh@21 -- # killprocess 42857 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 42857 ']' 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@952 -- # kill -0 42857 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 42857 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 42857' 00:37:17.088 killing process with pid 42857 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@967 -- # kill 42857 00:37:17.088 [2024-07-15 20:42:55.413261] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:17.088 20:42:55 keyring_file -- common/autotest_common.sh@972 -- # wait 42857 00:37:17.346 00:37:17.346 real 0m14.057s 00:37:17.346 user 0m34.779s 00:37:17.346 sys 0m3.220s 00:37:17.346 20:42:55 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:17.346 20:42:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.346 ************************************ 00:37:17.346 END TEST keyring_file 00:37:17.346 ************************************ 00:37:17.346 20:42:55 -- common/autotest_common.sh@1142 -- # return 0 00:37:17.346 20:42:55 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:17.346 20:42:55 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:17.346 20:42:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:17.346 20:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:17.346 20:42:55 -- common/autotest_common.sh@10 -- # set +x 00:37:17.346 ************************************ 00:37:17.346 START TEST keyring_linux 00:37:17.346 ************************************ 00:37:17.346 20:42:55 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:17.346 * Looking for test storage... 00:37:17.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:17.346 20:42:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:17.346 20:42:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.346 20:42:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.641 20:42:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.642 20:42:55 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.642 20:42:55 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.642 20:42:55 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.642 20:42:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.642 20:42:55 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.642 20:42:55 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.642 20:42:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:17.642 20:42:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:17.642 20:42:55 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:17.642 /tmp/:spdk-test:key0 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:17.642 20:42:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:17.642 20:42:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:17.642 /tmp/:spdk-test:key1 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=44681 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:17.642 20:42:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 44681 00:37:17.642 20:42:55 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 44681 ']' 00:37:17.642 20:42:55 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.642 20:42:55 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:17.642 20:42:55 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.642 20:42:55 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:17.642 20:42:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:17.642 [2024-07-15 20:42:56.007345] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:37:17.642 [2024-07-15 20:42:56.007437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44681 ] 00:37:17.642 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.642 [2024-07-15 20:42:56.068350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.900 [2024-07-15 20:42:56.171855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.157 20:42:56 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:18.157 20:42:56 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:18.157 20:42:56 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:18.157 20:42:56 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.157 20:42:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:18.157 [2024-07-15 20:42:56.434861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.157 null0 00:37:18.157 [2024-07-15 20:42:56.466897] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:18.157 [2024-07-15 20:42:56.467382] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:18.157 20:42:56 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.157 20:42:56 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:18.157 607693776 00:37:18.158 20:42:56 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:18.158 394515257 00:37:18.158 20:42:56 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=44739 00:37:18.158 20:42:56 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:18.158 20:42:56 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 44739 /var/tmp/bperf.sock 00:37:18.158 20:42:56 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 44739 ']' 00:37:18.158 20:42:56 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:18.158 20:42:56 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:18.158 20:42:56 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:18.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:18.158 20:42:56 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:18.158 20:42:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:18.158 [2024-07-15 20:42:56.531373] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 22.11.4 initialization... 
00:37:18.158 [2024-07-15 20:42:56.531438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44739 ]
00:37:18.158 EAL: No free 2048 kB hugepages reported on node 1
00:37:18.158 [2024-07-15 20:42:56.592776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:18.158 [2024-07-15 20:42:56.684553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:18.415 20:42:56 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:37:18.415 20:42:56 keyring_linux -- common/autotest_common.sh@862 -- # return 0
00:37:18.415 20:42:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:37:18.415 20:42:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:37:18.672 20:42:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:37:18.672 20:42:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:37:18.930 20:42:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:37:18.930 20:42:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:37:19.188 [2024-07-15 20:42:57.532828] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:37:19.188 nvme0n1
00:37:19.188 20:42:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:37:19.188 20:42:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:37:19.188 20:42:57 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:37:19.188 20:42:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:37:19.188 20:42:57 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:37:19.188 20:42:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:19.446 20:42:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:37:19.446 20:42:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:37:19.446 20:42:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:37:19.446 20:42:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:37:19.446 20:42:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:19.446 20:42:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:19.446 20:42:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:37:19.704 20:42:58 keyring_linux -- keyring/linux.sh@25 -- # sn=607693776
00:37:19.704 20:42:58 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:37:19.704 20:42:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:37:19.704 20:42:58 keyring_linux -- keyring/linux.sh@26 -- # [[ 607693776 == \6\0\7\6\9\3\7\7\6 ]]
00:37:19.704 20:42:58 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 607693776
00:37:19.704 20:42:58 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:37:19.704 20:42:58 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:19.704 Running I/O for 1 seconds...
00:37:21.099
00:37:21.099 Latency(us)
00:37:21.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:21.099 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:21.099 nvme0n1 : 1.03 3596.20 14.05 0.00 0.00 35160.60 15049.01 52040.44
00:37:21.099 ===================================================================================================================
00:37:21.099 Total : 3596.20 14.05 0.00 0.00 35160.60 15049.01 52040.44
00:37:21.099 0
00:37:21.099 20:42:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:21.099 20:42:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:21.099 20:42:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:37:21.099 20:42:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:37:21.099 20:42:59 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:37:21.099 20:42:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:37:21.099 20:42:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:21.099 20:42:59 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:37:21.356 20:42:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:37:21.356 20:42:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:37:21.356 20:42:59 keyring_linux -- keyring/linux.sh@23 -- # return
00:37:21.356 20:42:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:21.356 20:42:59 keyring_linux -- common/autotest_common.sh@648 -- # local es=0
00:37:21.356 20:42:59 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:21.356 20:42:59 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:37:21.356 20:42:59 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:37:21.356 20:42:59 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:37:21.356 20:42:59 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:37:21.356 20:42:59 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:21.356 20:42:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:21.614 [2024-07-15 20:43:00.019293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f9680 (107): Transport endpoint is not connected
00:37:21.614 [2024-07-15 20:43:00.019308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:37:21.614 [2024-07-15 20:43:00.020281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f9680 (9): Bad file descriptor
00:37:21.614 [2024-07-15 20:43:00.021280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:37:21.614 [2024-07-15 20:43:00.021303] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:37:21.614 [2024-07-15 20:43:00.021317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:37:21.614 request:
00:37:21.614 {
00:37:21.614 "name": "nvme0",
00:37:21.614 "trtype": "tcp",
00:37:21.614 "traddr": "127.0.0.1",
00:37:21.614 "adrfam": "ipv4",
00:37:21.614 "trsvcid": "4420",
00:37:21.614 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:21.614 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:21.614 "prchk_reftag": false,
00:37:21.614 "prchk_guard": false,
00:37:21.614 "hdgst": false,
00:37:21.614 "ddgst": false,
00:37:21.614 "psk": ":spdk-test:key1",
00:37:21.614 "method": "bdev_nvme_attach_controller",
00:37:21.614 "req_id": 1
00:37:21.614 }
00:37:21.614 Got JSON-RPC error response
00:37:21.615 response:
00:37:21.615 {
00:37:21.615 "code": -5,
00:37:21.615 "message": "Input/output error"
00:37:21.615 }
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@33 -- # sn=607693776
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 607693776
00:37:21.615 1 links removed
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@33 -- # sn=394515257
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 394515257
00:37:21.615 1 links removed
00:37:21.615 20:43:00 keyring_linux -- keyring/linux.sh@41 -- # killprocess 44739
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 44739 ']'
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 44739
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 44739
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 44739'
00:37:21.615 killing process with pid 44739
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@967 -- # kill 44739
00:37:21.615 Received shutdown signal, test time was about 1.000000 seconds
00:37:21.615
00:37:21.615 Latency(us)
00:37:21.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:21.615 ===================================================================================================================
00:37:21.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:21.615 20:43:00 keyring_linux -- common/autotest_common.sh@972 -- # wait 44739
00:37:21.873 20:43:00 keyring_linux -- keyring/linux.sh@42 -- # killprocess 44681
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 44681 ']'
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 44681
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 44681
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 44681'
00:37:21.873 killing process with pid 44681
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@967 -- # kill 44681
00:37:21.873 20:43:00 keyring_linux -- common/autotest_common.sh@972 -- # wait 44681
00:37:22.438
00:37:22.438 real 0m4.910s
00:37:22.438 user 0m9.141s
00:37:22.438 sys 0m1.507s
00:37:22.438 20:43:00 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:37:22.438 20:43:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:22.438 ************************************
00:37:22.438 END TEST keyring_linux
00:37:22.438 ************************************
00:37:22.438 20:43:00 -- common/autotest_common.sh@1142 -- # return 0
00:37:22.438 20:43:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:37:22.438 20:43:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:37:22.438 20:43:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:37:22.438 20:43:00 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:37:22.438 20:43:00 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:37:22.438 20:43:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:37:22.438 20:43:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
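Editor's note on the keyring_linux block above: it exercises SPDK's Linux-keyring PSK lookup end to end. A TLS PSK is stored as a 'user' key named :spdk-test:key0 in the session keyring, bdevperf attaches an NVMe/TCP controller that resolves the PSK by that name, the key's serial number and payload are cross-checked with keyctl, a second attach against the never-provisioned :spdk-test:key1 is expected to fail with a JSON-RPC Input/output error (code -5), and both keys are finally unlinked. The sketch below reproduces that flow in a hedged way: it assumes an SPDK checkout at $rootdir, a bdevperf instance already listening on /var/tmp/bperf.sock, and jq/keyctl on the PATH; the helper names bperf_cmd and get_keysn only mirror what keyring/common.sh and keyring/linux.sh appear to do in this log, so treat it as illustrative rather than the authoritative test code.

    #!/usr/bin/env bash
    set -e   # treat any failed check as a test failure

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your tree
    bperf_sock=/var/tmp/bperf.sock

    bperf_cmd() {
        # Forward an RPC to the bdevperf application over its private socket.
        "$rootdir/scripts/rpc.py" -s "$bperf_sock" "$@"
    }

    get_keysn() {
        # Resolve a key description to its serial number in the session keyring.
        keyctl search @s user "$1"
    }

    # Store the TLS PSK as a 'user' key; SPDK later looks it up by the same name.
    psk="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
    keyctl add user :spdk-test:key0 "$psk" @s

    bperf_cmd keyring_linux_set_options --enable
    bperf_cmd framework_start_init
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # Cross-check the serial number SPDK reports against the kernel's view,
    # and the key payload against the PSK that was provisioned.
    sn=$(bperf_cmd keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
    [[ $sn == "$(get_keysn :spdk-test:key0)" ]]
    [[ $(keyctl print "$sn") == "$psk" ]]

    bperf_cmd bdev_nvme_detach_controller nvme0

    # Negative case from the log: attaching with a key that was never linked
    # must fail (rpc.py exits non-zero on the JSON-RPC error).
    if bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
        echo "attach with a missing key unexpectedly succeeded" >&2
        exit 1
    fi

    # Cleanup mirrors keyring/linux.sh: unlink the provisioned key by serial.
    keyctl unlink "$sn"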
00:37:22.438 20:43:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:22.438 20:43:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:22.438 20:43:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:22.438 20:43:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:22.438 20:43:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:22.438 20:43:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:22.438 20:43:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:22.438 20:43:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:22.438 20:43:00 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:22.438 20:43:00 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:22.438 20:43:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:22.438 20:43:00 -- common/autotest_common.sh@10 -- # set +x 00:37:22.438 20:43:00 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:22.439 20:43:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:22.439 20:43:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:22.439 20:43:00 -- common/autotest_common.sh@10 -- # set +x 00:37:24.336 INFO: APP EXITING 00:37:24.336 INFO: killing all VMs 00:37:24.336 INFO: killing vhost app 00:37:24.336 INFO: EXIT DONE 00:37:25.269 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:25.269 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:25.269 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:25.269 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:25.269 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:25.269 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:25.269 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:25.269 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:25.269 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:25.269 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:25.527 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:25.527 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:25.527 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:25.527 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:25.527 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:25.527 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:25.527 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:26.462 Cleaning 00:37:26.462 Removing: /var/run/dpdk/spdk0/config 00:37:26.462 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:26.462 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:26.462 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:26.462 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:26.463 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:26.463 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:26.463 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:26.463 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:26.463 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:26.463 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:26.463 Removing: /var/run/dpdk/spdk1/config 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:26.463 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:26.463 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:26.463 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:26.463 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:26.463 Removing: /var/run/dpdk/spdk2/config 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:26.463 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:26.721 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:26.721 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:26.721 Removing: /var/run/dpdk/spdk3/config 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:26.721 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:26.721 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:26.721 Removing: /var/run/dpdk/spdk4/config 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:26.721 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:26.721 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:26.721 Removing: /dev/shm/bdev_svc_trace.1 00:37:26.721 Removing: /dev/shm/nvmf_trace.0 00:37:26.721 Removing: /dev/shm/spdk_tgt_trace.pid3917717 00:37:26.721 Removing: /var/run/dpdk/spdk0 00:37:26.721 Removing: /var/run/dpdk/spdk1 00:37:26.721 Removing: /var/run/dpdk/spdk2 00:37:26.721 Removing: /var/run/dpdk/spdk3 00:37:26.721 Removing: /var/run/dpdk/spdk4 00:37:26.721 Removing: /var/run/dpdk/spdk_pid12052 00:37:26.721 Removing: /var/run/dpdk/spdk_pid12195 00:37:26.721 Removing: /var/run/dpdk/spdk_pid16009 00:37:26.721 Removing: /var/run/dpdk/spdk_pid16184 00:37:26.721 Removing: /var/run/dpdk/spdk_pid17788 00:37:26.721 Removing: /var/run/dpdk/spdk_pid22689 00:37:26.721 Removing: /var/run/dpdk/spdk_pid22696 00:37:26.721 Removing: /var/run/dpdk/spdk_pid25589 00:37:26.721 Removing: /var/run/dpdk/spdk_pid26986 00:37:26.721 Removing: /var/run/dpdk/spdk_pid28384 00:37:26.721 Removing: /var/run/dpdk/spdk_pid29245 00:37:26.722 Removing: /var/run/dpdk/spdk_pid30646 00:37:26.722 Removing: 
/var/run/dpdk/spdk_pid31520 00:37:26.722 Removing: /var/run/dpdk/spdk_pid37473 00:37:26.722 Removing: /var/run/dpdk/spdk_pid37796 00:37:26.722 Removing: /var/run/dpdk/spdk_pid38189 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3916172 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3916903 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3917717 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3918156 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3918844 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3918990 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3919702 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3919713 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3919950 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3921235 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3922187 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3922495 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3922682 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3922882 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3923072 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3923233 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3923423 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3923679 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3923996 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3926858 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3927024 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3927193 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3927313 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3927626 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3927695 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3928056 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3928064 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3928347 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3928361 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3928527 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3928654 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3929019 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3929179 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3929377 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3929541 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3929591 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3929751 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3929916 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3930141 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3930335 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3930501 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3930655 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3930929 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3931082 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3931248 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3931430 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3931668 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3931829 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3931988 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3932259 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3932413 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3932574 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3932733 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3933010 00:37:26.722 Removing: /var/run/dpdk/spdk_pid3933166 00:37:26.980 Removing: /var/run/dpdk/spdk_pid3933328 00:37:26.980 Removing: /var/run/dpdk/spdk_pid3933560 00:37:26.980 Removing: /var/run/dpdk/spdk_pid3933663 00:37:26.980 Removing: /var/run/dpdk/spdk_pid3933877 00:37:26.980 Removing: /var/run/dpdk/spdk_pid3936013 00:37:26.980 Removing: /var/run/dpdk/spdk_pid39739 00:37:26.980 Removing: /var/run/dpdk/spdk_pid3989757 00:37:26.980 Removing: /var/run/dpdk/spdk_pid3992365 00:37:26.980 Removing: 
/var/run/dpdk/spdk_pid3999197 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4002367 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4004745 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4005259 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4009147 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4013052 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4013054 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4013595 00:37:26.980 Removing: /var/run/dpdk/spdk_pid40139 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4014301 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4015019 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4015917 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4015924 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4016105 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4016203 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4016205 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4016861 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4017513 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4018072 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4018572 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4018587 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4018747 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4019607 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4020394 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4025670 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4025939 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4028442 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4032136 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4034304 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4040561 00:37:26.980 Removing: /var/run/dpdk/spdk_pid40418 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4045744 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4047528 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4048332 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4058395 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4060619 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4085781 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4088556 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4089730 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4091015 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4091070 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4091196 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4091331 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4091650 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4092960 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4093685 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4093995 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4095609 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4096032 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4096590 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4098982 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4102235 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4106396 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4129243 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4132502 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4136387 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4137327 00:37:26.980 Removing: /var/run/dpdk/spdk_pid4138416 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4140834 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4143181 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4147264 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4147266 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4150028 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4150288 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4150424 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4150690 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4150698 00:37:26.981 Removing: 
/var/run/dpdk/spdk_pid4151770 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4153057 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4154236 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4155433 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4156607 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4157787 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4161587 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4161925 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4163320 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4164677 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4168264 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4170241 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4173532 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4176843 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4183044 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4187297 00:37:26.981 Removing: /var/run/dpdk/spdk_pid4187368 00:37:26.981 Removing: /var/run/dpdk/spdk_pid42857 00:37:26.981 Removing: /var/run/dpdk/spdk_pid42862 00:37:26.981 Removing: /var/run/dpdk/spdk_pid44318 00:37:26.981 Removing: /var/run/dpdk/spdk_pid44681 00:37:26.981 Removing: /var/run/dpdk/spdk_pid44739 00:37:26.981 Removing: /var/run/dpdk/spdk_pid6420 00:37:26.981 Removing: /var/run/dpdk/spdk_pid6833 00:37:26.981 Removing: /var/run/dpdk/spdk_pid7236 00:37:26.981 Removing: /var/run/dpdk/spdk_pid7676 00:37:26.981 Removing: /var/run/dpdk/spdk_pid8225 00:37:26.981 Removing: /var/run/dpdk/spdk_pid8747 00:37:26.981 Removing: /var/run/dpdk/spdk_pid9152 00:37:26.981 Removing: /var/run/dpdk/spdk_pid9564 00:37:27.239 Clean 00:37:27.239 20:43:05 -- common/autotest_common.sh@1451 -- # return 0 00:37:27.239 20:43:05 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:27.239 20:43:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:27.239 20:43:05 -- common/autotest_common.sh@10 -- # set +x 00:37:27.239 20:43:05 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:27.239 20:43:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:27.239 20:43:05 -- common/autotest_common.sh@10 -- # set +x 00:37:27.239 20:43:05 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:27.239 20:43:05 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:27.239 20:43:05 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:27.239 20:43:05 -- spdk/autotest.sh@391 -- # hash lcov 00:37:27.239 20:43:05 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:27.239 20:43:05 -- spdk/autotest.sh@393 -- # hostname 00:37:27.239 20:43:05 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:27.496 geninfo: WARNING: invalid characters removed from testname! 
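Editor's note on the coverage steps here: the lcov capture just above and the merge/filter passes that follow use the usual capture/combine/strip pattern, with the post-test counters merged into a pre-test baseline and then pruned of DPDK, system and example-app paths. A condensed, hedged sketch of the same flow is below; the long option string is trimmed to the flags that matter, $rootdir is the same SPDK checkout as before, and the file names are taken from the log.

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output

    # 1. Capture the counters left behind by the test run (geninfo may warn and
    #    sanitize the testname, as seen above).
    lcov $LCOV_OPTS -c -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"

    # 2. Merge the pre-test baseline with the post-test capture.
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # 3. Drop third-party and uninteresting paths from the combined tracefile.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done

    # cov_total.info can then be fed to genhtml if an HTML report is wanted
    # (genhtml is not part of the pipeline shown here).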
00:37:59.559 20:43:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:59.559 20:43:37 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:02.088 20:43:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:05.367 20:43:43 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:07.895 20:43:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:11.260 20:43:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:13.797 20:43:51 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:13.797 20:43:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:13.797 20:43:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:13.797 20:43:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:13.797 20:43:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:13.797 20:43:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.797 20:43:52 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.797 20:43:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.797 20:43:52 -- paths/export.sh@5 -- $ export PATH 00:38:13.797 20:43:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.797 20:43:52 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:13.797 20:43:52 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:13.797 20:43:52 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069032.XXXXXX 00:38:13.797 20:43:52 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069032.rCQBFE 00:38:13.797 20:43:52 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:13.797 20:43:52 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:38:13.797 20:43:52 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:13.797 20:43:52 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:13.797 20:43:52 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:13.797 20:43:52 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:13.797 20:43:52 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:13.797 20:43:52 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:13.797 20:43:52 -- common/autotest_common.sh@10 -- $ set +x 00:38:13.797 20:43:52 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:13.797 20:43:52 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:13.797 20:43:52 -- pm/common@17 -- $ local monitor 00:38:13.797 20:43:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:13.797 20:43:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:13.797 20:43:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:13.797 
20:43:52 -- pm/common@21 -- $ date +%s 00:38:13.797 20:43:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:13.797 20:43:52 -- pm/common@21 -- $ date +%s 00:38:13.797 20:43:52 -- pm/common@25 -- $ sleep 1 00:38:13.797 20:43:52 -- pm/common@21 -- $ date +%s 00:38:13.797 20:43:52 -- pm/common@21 -- $ date +%s 00:38:13.797 20:43:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069032 00:38:13.797 20:43:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069032 00:38:13.797 20:43:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069032 00:38:13.798 20:43:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069032 00:38:13.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069032_collect-vmstat.pm.log 00:38:13.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069032_collect-cpu-load.pm.log 00:38:13.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069032_collect-cpu-temp.pm.log 00:38:13.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069032_collect-bmc-pm.bmc.pm.log 00:38:14.730 20:43:53 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:14.730 20:43:53 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:14.730 20:43:53 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:14.730 20:43:53 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:14.730 20:43:53 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:14.730 20:43:53 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:14.730 20:43:53 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:14.730 20:43:53 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:14.730 20:43:53 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:14.730 20:43:53 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:14.730 20:43:53 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:14.730 20:43:53 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:14.730 20:43:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:14.730 20:43:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:14.730 20:43:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.730 20:43:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:14.730 20:43:53 -- pm/common@44 -- $ pid=55923 00:38:14.730 20:43:53 -- pm/common@50 -- $ kill -TERM 55923 00:38:14.730 20:43:53 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:14.730 20:43:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:14.730 20:43:53 -- pm/common@44 -- $ pid=55925 00:38:14.730 20:43:53 -- pm/common@50 -- $ kill -TERM 55925 00:38:14.730 20:43:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.730 20:43:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:14.730 20:43:53 -- pm/common@44 -- $ pid=55927 00:38:14.730 20:43:53 -- pm/common@50 -- $ kill -TERM 55927 00:38:14.730 20:43:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.730 20:43:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:14.730 20:43:53 -- pm/common@44 -- $ pid=55957 00:38:14.730 20:43:53 -- pm/common@50 -- $ sudo -E kill -TERM 55957 00:38:14.730 + [[ -n 3813003 ]] 00:38:14.730 + sudo kill 3813003 00:38:14.737 [Pipeline] } 00:38:14.752 [Pipeline] // stage 00:38:14.757 [Pipeline] } 00:38:14.775 [Pipeline] // timeout 00:38:14.780 [Pipeline] } 00:38:14.798 [Pipeline] // catchError 00:38:14.803 [Pipeline] } 00:38:14.817 [Pipeline] // wrap 00:38:14.822 [Pipeline] } 00:38:14.836 [Pipeline] // catchError 00:38:14.844 [Pipeline] stage 00:38:14.846 [Pipeline] { (Epilogue) 00:38:14.860 [Pipeline] catchError 00:38:14.862 [Pipeline] { 00:38:14.876 [Pipeline] echo 00:38:14.877 Cleanup processes 00:38:14.884 [Pipeline] sh 00:38:15.163 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:15.164 56079 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:15.164 56187 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:15.178 [Pipeline] sh 00:38:15.460 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:15.460 ++ grep -v 'sudo pgrep' 00:38:15.460 ++ awk '{print $1}' 00:38:15.460 + sudo kill -9 56079 00:38:15.472 [Pipeline] sh 00:38:15.754 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:25.725 [Pipeline] sh 00:38:26.021 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:26.021 Artifacts sizes are good 00:38:26.034 [Pipeline] archiveArtifacts 00:38:26.041 Archiving artifacts 00:38:26.262 [Pipeline] sh 00:38:26.538 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:26.551 [Pipeline] cleanWs 00:38:26.560 [WS-CLEANUP] Deleting project workspace... 00:38:26.560 [WS-CLEANUP] Deferred wipeout is used... 00:38:26.567 [WS-CLEANUP] done 00:38:26.568 [Pipeline] } 00:38:26.583 [Pipeline] // catchError 00:38:26.593 [Pipeline] sh 00:38:26.868 + logger -p user.info -t JENKINS-CI 00:38:26.876 [Pipeline] } 00:38:26.890 [Pipeline] // stage 00:38:26.894 [Pipeline] } 00:38:26.910 [Pipeline] // node 00:38:26.915 [Pipeline] End of Pipeline 00:38:26.953 Finished: SUCCESS